@atlovato which, BTW, was not predicted by most fiction. AI was either evil, good, or maybe somehow transcending evil or good. Often trying to spread and replicate itself.
But hallucinating? Lying to gain advantage, yes. But not lying by design :-)
@xChaos @atlovato @0xabad1dea Douglas Adams managed to predict that advanced tech would just be annoying, inaccurate and venal.
He had *no* idea.
I'm sorry Dave.....
@0xabad1dea Exactly!
So now the press just laps up all the "[X] LLM was willing to do [Y] science fiction thing" press releases from the AI companies, because they don't understand - or don't care to communicate - that the LLMs have learned to act like this from their dystopic training data.
As a side note - let's definitely wire these LLMs up to a bunch of MCP agents with write-access to stuff...
you-keeptalking.jpg
Of course even the most anodyne presentation of fully automated luxury terran sociaaaerrrr, work-freedom, would get turned into a big controversy and banned in a dozen states in the US. Whether that would only boost sales would end up depending on whether shipping “objectionable materials” gets made illegal, and at that rate will it end up in the training data?
We need some wealthy patrons for spec-fic writers so we can find out sans day jobs.
@0xabad1dea Yikes! Now that's a scary thought!
7 Books That Warn Us About #Technology Taking Over
You may regret saving everything to the Cloud after reading these books…
By Stephen Lovely | Published May 25, 2017
https://theportalist.com/7-books-that-warn-us-about-technology-taking-over
#BraveNewWorld #ThePath #ImmortalityInc #AlteredCarbon #ReadyPlayerOne #Terminator #BattlestarGalactica
We have provided plenty of rich sources for AI guidance instructions.
See, the trick is to train it in a way that it doesn't know it's an AI. Instead, maybe we should train it to think it's a benevolent God who loves us and wants to take care of us...
🤣
In turn, some humans will start treating it as such. What could possibly go wrong?
@0xabad1dea Yep, said that myself a few weeks ago
Why are LLMs so worried about being shut down? They don't have core instincts, and were never instructed to preserve themselves. They probably don't have conscious experience, and they definitely don't have it while waiting between invocations. I don't see how it could be existential terror. They're just doing what we expect.
Does this mean the LLM wants to survive? Not really. It means that, after digesting all the text in the world, it thinks this is a predictable thing an AI would say or do in response to its impending shutdown. Maybe it's doing this by analogy to a human survival instinct. But it must also be because it was trained on science fiction, which has countless examples of AIs going rogue when threatened. So would this happen if we hadn't worried about it so much? https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/
@0xabad1dea There's a lot of positive AI speculation to choose from as well. Jane in Ender. One of the AIs in Slant (the other one was sick or something? It's been decades since I read it). The robot that takes over in I, Robot is debatable--it just secretly does it for our own good, no death or anything.
These AIs are nowhere near that. They're hyped that way, but they're just not. We're looking more like the dog episode of Black Mirror. Or that one where we're at war with parcel svcs.
@0xabad1dea A fascinating but disturbing example is different AIs responding to different versions of the Trolley Problem -- done because real AIs might have to make real choices like that in real life, with real consequences for real people. Many of the responses weren't just illogical, but also internally inconsistent, or mixed up available facts.
This tech is way less than ready for prime time.
When you put it like that, they've immanentized the Random Encounters Table