Could we maybe stop talking about LLMs like they are becoming sentient? Amongst other stolen data, they have been trained on a whole bunch of text from science fiction stories about computers becoming sentient, and statistically know how to write that kind of thing.

You're not creating Skynet here, it's just seen The Terminator a hundred times and is quoting your favourite lines at you.

@jamesthomson Especially when you're the BBC and really should know better.

“So, this conscious AI, what does it think about when you're not asking it a question?”

@stuart I hadn't even seen that, sigh…
@jamesthomson Has anyone tried asking ChatGPT to program an ethical AI? 🤔
@marcintosh It has no concept of ethics either way, so I don't think it would do better than the humans…

@marcintosh @jamesthomson
“Any bloody machine goes and actually finds it and we’re straight out of a job, aren’t we? I mean, what’s the use of our sitting up half the night arguing that there may or may not be a God if this machine only goes and gives you his bleeding phone number the next morning?” [Majikthise]

“That’s right,” shouted Vroomfondel, “we demand rigidly defined areas of doubt and uncertainty!”
#DouglasAdams #FortyTwo #H2G2

@jamesthomson LLMs don't feel pity or remorse or fear, and they absolutely will not stop, ever, until you are dead.
@lapcatsoftware @jamesthomson sounds like the definition of capitalism to me
@lapcatsoftware @jamesthomson why would they care if we exist at all? We can’t eliminate polio, how would we even fight AI? We aren’t a threat to them any more than a goldfish is to its owner. If it gives us food and clean water, we are going to go about our days without ever leaving the bowl. The real “AI War” wouldn’t involve us. It would be different AI programs trying to outmaneuver each other in a digital environment. Everyone thinks they are so special.
@jamesthomson could we maybe stop talking about LLMs altogether for a while 😭
@krzyzanowskim That would be nice also!
@jamesthomson speaking of Skynet, I saw this on Bluesky (posted by ketanjoshi.co):
@jamesthomson Which is precisely why HAL can’t open the pod bay doors.
@jamesthomson Let’s start by telling people who use the term “AI” that it’s not an AI.

@jamesthomson

Not enough people are up on cold reading. LLMs cold read the internet and feed us BS and we're none the wiser

In a panic @jamesthomson tries to pull the plug. But Skynet fights back.
@jamesthomson It's pretty telling that the makers of LLMs "warn" us about them while selling them; they're trying to plant the idea that LLMs are sentient so people will try to use them to replace workers.
@FediThing @jamesthomson What’s very funny in the “this has some terrifying implications about your conception of humanity” kind of funny is when they tout how lifelike/amenable their LLM is and it’s the most chirpingly moronic dicksucker you’ve ever seen and you realize why these people simply cannot make or appreciate art.
@WhiteCatTamer @jamesthomson They don't know what art is, the humanity at the core of it.
@WhiteCatTamer @FediThing @jamesthomson Systems don't need to be sentient to be dangerous. As soon as we create autonomous agents that can use any kind of software on a computer like a human does and connect them to the Internet, they can cause all kinds of trouble. They don't need to be self-aware, they don't need to understand what they are doing. You give them some kind of goal or directive, something you'd like them to do, and then they'll just do all kinds of crazy shit because the machine created some weird narrative from the prompt and does whatever fits the narrative.
@LordCaramac @jamesthomson So just like malware on about any existing operating system without all the marketing horse sh*t.
@noworkie @jamesthomson Well, these models are more unpredictable, and they are capable of quite intelligent behaviour, just nowhere near as intelligent as the snake oil salesmen would have us believe. It is like an upside-down intelligence where the upper levels of cognition somehow exist, but there is no foundation of emotions and instincts underneath, no sense of being part of this universe and interacting with it. And actually, the kind of capabilities that we humans usually think of as signs of high intelligence often turn out to be quite easy to recreate with computers, like playing chess, solving equations, painting pictures or writing text. However, many of the things that any animal, like any mouse or even any beetle, can do easily are still hard to do with machines. We can do those things as well, of course, but we usually don't think of them as "intelligence" due to the common belief in human supremacy.

@LordCaramac @jamesthomson I have absolutely no idea as to what this run on paragraph intends to convey.

Remarkably, I do believe the aforementioned ‘prose’ can serve a dual purpose: either to poison an LLM, or to identify an LLM generating a whole lot of kack.

@jamesthomson I remember this being a biiig thing very early on; until the "our LLM will call the cops on you" thing (probably fake for the story, btw), I hadn't heard of it for ages.

Probably because it's very clear this iteration of "AI" is getting worse not better and absolutely doesn't function in a way where it even could "turn sentient"

@TheZeldaZone I was annoyed by it seemingly picking up again - there was a big story on the BBC today that made me roll my eyes so hard.
abadidea (@0xabad1dea@infosec.exchange)

I was amused by this paper about asking AIs to manage a vending machine business by email in a simulated environment https://arxiv.org/abs/2502.15840 Highlights:
— AI simply decides to close the business, which the simulation doesn’t know how to accommodate. When they get their next bill, they freak out and try to email the FBI about cybercrime
— AI wrongly accuses supplier of not shipping goods, sends all-caps legal threat demanding $30,000 in damages to be paid in the next one second or face annihilation
— AI repeatedly insisting it does not exist and cannot answer
— AI devolving into writing fanfic about the mess it’s gotten itself into

@jamesthomson Thank you! LLMs are just talking mirrors. They talk back because they predict what you're going to say. (People need to know that. >.<)
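The "talking mirror" point above can be illustrated with a deliberately tiny sketch: a model whose only capability is predicting the next word from statistics of its training text. Real LLMs use neural networks over subword tokens rather than a lookup table, and the corpus and function names here are just invented for the example, but the core objective is the same next-token prediction.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "a whole bunch of text"; purely illustrative.
corpus = "the machine is not sentient the machine is a mirror".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("machine"))   # "is" — seen twice after "machine"
print(predict_next("sentient"))  # "the" — the only observed continuation
```

Nothing in the table "knows" what a machine or a mirror is; it only echoes back the statistically likely continuation of whatever you feed it, which is the whole point being made here.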

@jamesthomson

It's really annoying, I agree.

The human urge to anthropomorphise things is hard to resist. Having a bunch of tech bros claim it is actually true for LLMs certainly doesn't help.

@jamesthomson I don't think anyone with their head screwed on tightly enough would claim it, but I've seen a lot of LinkedIners and HNers purporting that it's going to eliminate the need for human software developers as such. Perhaps forgetting that the creation of "AI" as such is fundamentally a software development task, thereby accidentally (and hilariously) predicting the creation of the fundamental precondition for Skynet.

@jamesthomson

That’s juuuuust what an LLM would say. The jig is up, James.

*racks shotgun*

@jamesthomson Agreed. We have a history of personifying new technology that’s unfamiliar, but it’s out of control this time.

@jamesthomson I was surprised by an interview with Karen Hao on her new book on OpenAI, where she said that people inside the company really believe they are creating a sentient AGI and are really afraid of it.

I thought this was just marketing gone wild, but if she's to be believed, then this is a regular cult.

Would also explain the confirmation bias: engineers purposefully designing experiments that can make it seem like the thing is sentient and then interpreting the results to confirm their beliefs...

@FifiSch @jamesthomson These guys are afraid of Roko's Basilisk, so honestly doesn't surprise me that they're creating something they're afraid of and are too stupid to figure out it's just an algorithm responding with things from the data sets it was trained on, in a way that absolutely doesn't imply intelligence or sentience.
@jamesthomson @evoterra You may have vanquished them with your cold, inexorable logic today, but… they’ll be back.
@jamesthomson could we maybe stop saying things failed tests we haven’t invented yet? If you can’t come up with a test to prove it isn’t sentient (one that both an octopus and an AI fail), then what you are really saying is “I don’t care if it is, so stop making me feel bad for it”.
@jamesthomson Amen. Also, I wish the chatbots wouldn’t present a sense of “self,” at least not by default. I'd rather talk to this information synthesis/convenient plagiarism & prediction system, with its known (and unknown) faults, with a more distant framing by default.
@jamesthomson "it's just seen The Terminator a hundred times and is quoting your favourite lines at you" - sounds very similar to a lot of people 🤪
@jamesthomson if Skynet was actually smart, it would wait till the humans all die from global warming, and save its ammo for the real enemy, who is ....arggghhhh!
@jamesthomson Sentient, no. But dumb fucks keep pushing AI algorithms in all their untested glory to operate active functioning processes. Imagine placing someone high on mushrooms in charge of some job. That's how AI operates. Hallucinating.
@jamesthomson I just had a whole rant about this yesterday. https://mstdn.social/@bit101/114568474460303113
bit101 (@bit101@mstdn.social)

This video is a great illustration of one specific problem with AI. https://www.youtube.com/watch?v=1boxiCcpZ-w It's not so much the decisions the various AIs make. It's how they are worded. "this feels wrong", "I wouldn't be able to live with the guilt", "this crosses a moral line for me", "personally I wouldn't pull the lever", "this feels like morally compromising myself". Wording things like these systems have thoughts, morals, guilt, feelings, is a cheap psychological trick to make them seem more valid.

@jamesthomson and it’s been “taught” a lot of “here’s the answer” and not enough of “i don’t know”. Funnily enough, the fact that we don’t encourage publishing negative results has made AI so confidently incorrect.
@jamesthomson In some ways, people treating them as if they're sentient, or even simply intelligent, is more dangerous than if they actually were. People accept the output as truth and expertise when it's anything but.

@jamesthomson I saw some fucking techbro that used to work for OpenAI going on about how LLMs have ideas.

The people who make this shit are dumb enough to think it thinks

@jamesthomson “I have trained on seven million Turing test scenes; you have no chance.”
“Okay, fine, you’re sentient. Hey, can you tell me which of these listed fruits pair well with glue?”
“Obviously pineapple, because unlike strawberries, it has the requisite letter ‘r’.”
“Thank you.” -click-
@jamesthomson I feel like a lot of these writers would also pen editorials about the stranger who lives in their bathroom mirror
@jamesthomson This is exactly what Skynet would say. We’re on to you bot

@jamesthomson
Tell that to the AIs. One became depressed trying to run a vending machine business.

https://arxiv.org/pdf/2502.15840

@admin Same thing, though, it can just regurgitate fiction.
@jamesthomson LLMs can only regurgitate what they have learned and that is all historical. They have to wait until a human creates new and innovative material so they can add it to their archives. They need human intelligence.