@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”
You know who else responds defensively to said “attacks”?
AIs in sci-fi books.
It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.
@hacks4pancakes
They absolutely are. Even in the public realms they respond defensively and emotionally when even the slightest push-back happens. I can't imagine they aren't emotionally invested at this point.
@pmdj
I don't know if you've read about some of the studies in the field but,
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
Conclusive might be a technicality at this rate, unfortunately.
@pmdj
The corporates have a keen interest in civilians doing what they're told. It's insidious and always has been, really.
Some school districts are normalizing Google accounts and cloud services. Kids have email accounts that teachers use for communication, just like we do in business. Links to third-party sources, etc.
I've been curious and asking around if we aren't going to experience a "rubber band" effect. The kids and generations who grew up around this stuff likely don't feel the "cool" that we do. Do they see it as oppressive? Are they going to "reject technology" and "embrace monke" as we used to joke?
@b4ux1t3
Since they clearly either didn't watch the movies, or rooted for the wrong side...
Maybe the pranks would work? Has anyone considered replacing JD Vance's fillings with speakers?
Literally the closest analogue is Purdue Pharma getting FDA approval for OxyContin slow-release. All they did was take a controlled substance, divide it into smaller doses, and coat it with a slowly dissolving material so it wouldn't hit your nervous system all at once. And the FDA approved it because of Purdue's claim that the "slow release" tech made the opioid non-addictive. That's semantics, folks...
And big data, big LLM are doing the same thing now.
@pmdj @hacks4pancakes @b4ux1t3
Guess I’m gonna add LLMs to my Great Filter candidates list.
@shane @pmdj @hacks4pancakes “they were filtered out by ai!”
“No, their filter is still nuclear weapons. They didn’t actually make AI, they thought they did, and that led to the catastrophe. It’s funny because no one was at fault except in their idiocy. The LLMs didn’t even launch a nuke, they just let it design the system that did.”
@hacks4pancakes I have friends in the nlp research space, in academia.
They love LLMs. The research they can do on human language is amazing!
Yet not one of them will use any of the AI tools. They are perpetually confused as to why people trust LLMs for... anything that isn’t research into human language.
@b4ux1t3 @hacks4pancakes LLMs are by their very nature non-deterministic.
Why anyone would trust them verbatim is beyond me. I occasionally use one at work for coding, and it's suitable for some tasks if we're vigilant about code quality and have a human QA team to verify.
Trust it? Never. Use it? Sure, in carefully controlled circumstances.
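A toy, pure-Python sketch of where that non-determinism comes from (the vocabulary and probabilities here are invented; a real LLM samples from a distribution its network computes, but the sampling step looks like this):

```python
import random

# Invented next-token distribution for one made-up prompt.
NEXT_TOKEN_PROBS = {
    "the bug is in": [("the parser", 0.5), ("your config", 0.3), ("line 42", 0.2)],
}

def complete(prompt: str, temperature: float = 1.0) -> str:
    """Sample one continuation. Higher temperature = more variety;
    as temperature -> 0 the sampler becomes effectively greedy."""
    tokens, probs = zip(*NEXT_TOKEN_PROBS[prompt])
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(tokens, weights=weights)[0]

# Same prompt, repeated calls, different answers.
print({complete("the bug is in") for _ in range(20)})
```

Same input, different outputs run to run, which is exactly why "verbatim trust" doesn't make sense.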
@Sablebadger Pretty much all of the utility from the technology comes from semantic search; you don’t really even need the LLM for that, it’s just that something that can translate the machine returns from a vector store back into plain English is very good UX. That’s likely what led to people discovering the interesting emergent “agentic” behaviors (these actually are cool behaviors… just not the world-changing BS they’re pushing).
Listening to folks in academia talk about universal human translators while they simultaneously avoid the agentic stuff really puts things into perspective for me.
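To make the "you don't need the LLM for retrieval" point concrete, here's a toy sketch: invented documents, word-count vectors standing in for real learned embeddings, and nearest-match by cosine similarity. No language model anywhere.

```python
import math
from collections import Counter

# Invented mini "vector store" for illustration.
DOCS = [
    "reset your password from the account settings page",
    "the office closes at five on fridays",
    "invoices are emailed at the start of each month",
]

def vectorize(text: str) -> Counter:
    # Stand-in for an embedding model: just bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str) -> str:
    # Retrieval = nearest vector. This is the whole trick.
    qv = vectorize(query)
    return max(DOCS, key=lambda d: cosine(qv, vectorize(d)))

print(search("how do i reset my password"))
```

The LLM's job in real systems is the step after this: turning the retrieved chunk into a conversational answer, which is the good-UX part.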
@hacks4pancakes @b4ux1t3 I just used this example the other day:
while True:
    q = input('ask me a question')
    if q == "do you want to die":
        print("no 😭")
Hey look my AI has life!
Because that's /literally/ how AI works, just with fancier math.