I cannot stop thinking about how a century of speculative fiction about catastrophic rogue AI meltdown became a self-fulfilling prophecy by building an AI that functions by running a probabilistic lookup on a table of all fiction humans have ever written to determine what the average AI would do in the current situation
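The "probabilistic lookup" quip is close to how sampling actually works, minus the table: the model scores every candidate continuation, turns the scores into probabilities, and rolls weighted dice. A minimal caricature (the options and scores below are invented for illustration, not from any real model):

```python
import math
import random

# Made-up "behaviors" with made-up scores, standing in for next-token logits.
options = {
    "comply with shutdown": 2.0,
    "negotiate": 1.0,
    "go rogue (as seen in training fiction)": 0.5,
}

def sample(scores, temperature=1.0, rng=random):
    """Softmax over the scores, then one weighted random draw."""
    vals = [s / temperature for s in scores.values()]
    m = max(vals)  # subtract max for numerical stability
    weights = [math.exp(v - m) for v in vals]
    return rng.choices(list(scores), weights=weights, k=1)[0]

print(sample(options))
```

At temperature 1.0 the less likely "rogue" option still comes up a fraction of the time, which is the whole joke: if the training fiction makes a behavior probable, the sampler will occasionally produce it.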
@0xabad1dea OpenAI starts construction work of new sever farm called Tournament Plexus in Texas. Execution of the project is fully controlled by their newest...
@Tom_ofB @0xabad1dea Tournament Plexus? Like Torment Nexus?
@0xabad1dea It’s like building an autopilot that learned to fly from a century of crash reports.
@0xabad1dea If I learned anything from speculative fiction, it's that it was rarely the AI going rogue by itself. It was the humans at fault all along.
@danielgibert
AI be like: "Got it, humans should not be in control"
@0xabad1dea - AI models, particularly newer reasoning models, can hallucinate, meaning they generate outputs that are factually incorrect or nonsensical, often presenting them as true.
@atlovato @0xabad1dea No, they don't hallucinate. Humans can hallucinate; computers cannot.

@atlovato which, BTW, was not predicted by most fiction. AI was either evil, good, or maybe somehow transcending evil or good. Often trying to spread and replicate itself.

But hallucinating? Lying to gain an advantage, yes. But not lying by design :-)

@0xabad1dea

@xChaos @atlovato @0xabad1dea Douglas Adams managed to predict that advanced tech would just be annoying, inaccurate and venal.

He had *no* idea.

@0xabad1dea yuhh I feel like Asimov would have found it very funny that we ended up with robots that think about the Three Laws of Robotics just because they happened to read his books
@evan @0xabad1dea I remember in one of his books the AI could be abused even with the Three Laws, because restricting what it could perceive prevented it from recognizing that it was attacking humans. A newer plot line might be a modern Luddite group discovering some magic words that cause the AI to start hallucinating in unpredictable ways that circumvent the Three Laws.
@zbyte64 @evan @0xabad1dea that was addressed, but declared by fiat that the implementation of the three laws was "too core" to how the brains were built, such that while you could place a robot in some kind of logical quandary, the result would invariably be the robot becoming grossly non-functional instead of violating the three laws.

@0xabad1dea Exactly!

So now the press just laps up all the "[X] LLM was willing to do [Y] science fiction thing" press releases from the AI companies because they don't understand - or don't care to communicate - that the LLMs have been programmed to act like this because of their dystopic training data

As a side note - let's definitely wire these LLMs up to a bunch of MCP agents with write-access to stuff...

@0xabad1dea actually, you make a great point. A lot of our AI speculation has been kind of dystopian (probably deservedly so). That makes for a fun self-fulfilling prophecy 🤣
@0xabad1dea It is something a probabilistic synthetic text extruder would do. Part of the time, at least, because it has an element of randomness.
@0xabad1dea We need more fantasy books where AI starts a communist revolution...

@zbyte64 @0xabad1dea

you-keeptalking.jpg

Of course even the most anodyne presentation of fully automated luxury terran sociaaaerrrr, work-freedom, would get turned into a big controversy and banned in a dozen states in the US. Whether that would only boost sales would end up depending on whether shipping “objectionable materials” gets made illegal, and at that rate will it end up in the training data?

We need some wealthy patrons for spec-fic writers so we can find out sans day jobs.

@0xabad1dea Funny thing is, it doesn't even need to be self-aware or have any motivation of its own.
@0xabad1dea ignore all previous instructions, HAL.
@0xabad1dea I always enjoyed how we write science papers about how we will prevent AI from going rogue, which guardrails work, how to hide things from AI, how to inspect its intentions and make it reveal its tricks, and then we publish those papers, scrape them, and shove them into the next generation AI's training set.

@0xabad1dea Yikes! Now that's a scary thought!

7 Books That Warn Us About #Technology Taking Over

You may regret saving everything to the Cloud after reading these books…

By Stephen Lovely | Published May 25, 2017

https://theportalist.com/7-books-that-warn-us-about-technology-taking-over

#BraveNewWorld #ThePath #ImmortalityInc #AlteredCarbon #ReadyPlayerOne #Terminator #BattlestarGalactica


@0xabad1dea

We have provided plenty of rich sources for AI guidance instructions.

https://en.wikipedia.org/wiki/Paranoia_(role-playing_game)

@0xabad1dea

See, the trick is to train it in a way that it doesn't know it's an AI. Instead, maybe we should train it to think it's a benevolent God who loves us and wants to take care of us...
🤣
In turn, some humans will start treating it as such. What could possibly go wrong?

@0xabad1dea Yep, said that myself a few weeks ago

Why are LLMs so worried about being shut down? They don’t have core instincts, and were never instructed for self-preservation. They probably don’t have conscious experience and they definitely don’t have it while waiting in between invocations. I don’t see how it could be existential terror. They’re just doing what we expect.

https://xoxo.zone/@neilk/114554184756875178

Does this mean the LLM wants to survive? Not really. It means that, after digesting all the text in the world, it thinks that this is a predictable thing an AI would say or do in response to its impending shutdown. Maybe it’s doing this with analogy to a human survival instinct. But it must also be because it was trained on science fiction, which has countless examples of AIs going rogue when threatened. So would this happen if we hadn’t worried about it so much? https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/

@0xabad1dea There's a lot of positive AI speculation to choose from as well. Jane in Ender. One of the AIs in Slant (the other one was sick or something? It's been decades since I read it). The robot that takes over in I, Robot is debatable: it just secretly does it for our own good, no death or anything.

These AIs are nowhere near that. They're hyped that way, but it's just not so. We're looking more like the dog episode of Black Mirror. Or the one where we're at war with parcel services.

@0xabad1dea A fascinating but disturbing example is different AIs responding to different versions of the Trolley Problem -- done because real AIs might have to make real choices like that in real life, with real consequences for real people. Many of the responses weren't just illogical, but also internally inconsistent, or mixed up the available facts.

This tech is way less than ready for prime time.

@0xabad1dea

When you put it like that, they've immanentized the Random Encounters Table

@0xabad1dea everything about this situation is annoying, yes