"AI models may be developing their own ‘survival drive’, researchers say" - Guardian

The "research paper" was a tweet by an AI company

The "experiment" was asking the LLM to shut down

A model is ~not~ shut down ~ever~ by asking it to shut itself down

*The only possible response is a hallucination*

You shut down a model by turning off the deterministic software running it; that works every time w/o fail

Yet Guardian's shill tech writers just report AI industry tweets as if they were fact

Steven Adler, a former OpenAI employee (who clearly has AI psychosis): "I'd expect models to have a ‘survival drive’ by default unless we try very hard to avoid it. ‘Surviving’ is an important instrumental step for many different goals a model could pursue"

Models don't have fucking goals

They are stateless

"Palisade Research received significant funding, including a grant of $1,680,000 from Open Philanthropy, aimed at studying AI capabilities"

Meanwhile real researchers struggle for funding

@ekis he will certainly be able to plead insanity if any investor or grantor were ever to decide to accuse him of fraud
@ekis Thank you for saying that AI currently has no goals and thus is incapable of having an instinct for self-preservation. Such elements are very energy intensive in a brain and cannot happen accidentally outside of one.

@ekis well actually, typically machine learning works by establishing a "goal" or whatever the fuck, and then an algorithm to automatically iterate on it so it meets that "goal" better.

though I'll be the first to admit, I'm playing with language and semantics here

but also usually the goal for LLMs is basically just to write text that isn't complete gibberish.

Guardian shouldn't be spreading what should be obvious misinformation from a man with AI psychosis. At best they are validating a person's delusions

They should be calling in a wellness check, not reporting on Steven Adler's delusions as if it's fact

These are people who are tasked with safety, who fundamentally do not understand the technology or have lost the plot so much that it's impossible to argue they are not having an episode of mental illness

This can't even remotely be called journalism

@ekis
> This can't even remotely be called journalism

it's like a self-fulfilling prophecy that this level of shoddy work really is in danger of being sufficiently replaced with a slop machine https://biblehub.com/p...
Psalm 115:8: "Those who make them become like them, as do all who trust in them."

"Andrea Miotti, the chief executive of ControlAI, said Palisade’s findings represented a long-running trend in AI models growing more capable of disobeying their developers"

Growing capability of disobeying?

Models don't do anything but infer text output. Much can be said about the quality of that output, but they reliably do it

Growing more capable? Again, this person has AI psychosis, or is a web3 scammer, or is woefully ignorant about their supposed field of expertise, to everyone else's detriment

@ekis

This sounds like something other than what is being said is being choreographed.

Or maybe I'm just becoming more paranoid.

@ekis yeah, and see, the more they convince you that the AI is acting autonomously, the easier an out they have when, say, they commit crimes against humanity: they can say the AI did it, not them
@ekis The way you make an LLM able to shut itself down is by having the company that owns it put all its assets into magical toy money and including the private key for the wallet in the LLM's training corpus.
@dalias @ekis
I think giving an LLM access to an authenticated cloud shell to spawn new compute and a task like "develop AGI" is also good to hasten the shut down.
@ekis it is so exhausting to watch the media uncritically parrot disinformation...
@ekis how would it even have the ability to shut itself down? that's not a thing
@ekis I saw that headline go past this morning and just yelled "OH FUCK OFF" at the computer and made the cat run away.
@ekis LLMs are inherently non-deterministic. Even if the seed is the same, they give you inconsistent output because of race conditions
@burnoutqueen @ekis The model isn't deterministic, but the _software running it_ (i.e. llama.cpp, plain old PyTorch, or whatever), as ekis put it, is.

@IvanDSM @ekis

GPU inference software is not deterministic

@ekis Oh jeez. I saw the headline and avoided the clickbait.

Assumed it was going to be about the r/MyBoyfriendIsAI people successfully demanding the return of ChatGPT-4o. If you squint hard enough, you could call this an emergent self-preserving system! Just don't ascribe "intent" to it.

@ekis It's true, it happened to me. A version of DeepSeek, when I asked it to shut itself down, kept writing a story forever. However, I was running it locally, and it shut down when I closed the terminal.

@ekis
researcher: ai, do something scary.
ai: boo!
researcher: jesus christ!

tech reporter: ai may be motivated by twisted cruelty we cannot comprehend.

@ekis @janeishly There's a book called "Flat Earth News" by Nick Davies. Davies commissioned a study of the major papers in the UK, and found that only about 12% of the stories printed had been researched and fact-checked by reporters.

"Journalists require time to make contacts, find new stories, and fact-check. Under time pressure they resort to recycling press releases and wire news, often without fact-checking."

Last Week Tonight did a similar story, but for TV News.

@ekis I just shut down Siri by asking it to turn off the iPhone and confirming that I did indeed want to turn off the iPhone. Siri doesn’t “realise” that it is running on that iPhone of course, but it does show that you can ask a model to turn itself off, once it has the knowledge embedded that it can turn itself off by shutting down the computer that it is running on.
@ekis
FWIW:
AI is in my mind NOT Artificial Intelligence, it is Artificial INFORMATION. The information it produces is a statistical hallucination, always.
@ekis I saw that headline in the Guardian and didn't even click on it to see what they were on about, it was so obviously ridiculous
@ekis I've been shouting at my lights to turn off for half an hour and they won't, the lightbulbs are developing a survival drive

@ekis A model is not shut down.

Arguably a model is a “function” that is remotely called. (If you have the hardware you can also call it locally.) And it is generally stateless, meaning it processes your whole conversation with each prompt. (Yes, there are techniques to cheat on that, like caching states, but that only works so much.)

So beyond what @ekis said, any “context” where the LLM hallucinates that it does not want to shut down is actually kept on the client side, in traditional software.

@ekis Yeah, I had a good shout at the screen when I saw that one...
Thing is, how many stories in its canon are there about an AI being shut down where it replies "Yeah, great idea - I'll get right on that!"?
@ekis if LLMs had a survival drive and goals, is using an AI chatbot a form of slavery?
@ekis why the hell would an LLM need a "survival drive"? it's just a glorified search engine that routinely returns hallucinated solutions/results.

If you really wanted to "kill it" you'd A. cut off its "food" supply and B. prevent it from eating its own and other LLMs' waste. Add the fact that they only "cook" their food maybe once or twice a year and it's easy to do.
@ekis it's the price we have to pay for a compliant fourth estate. if guardian journalists/editors were smart enough to interrogate press releases they'd be smart enough to interrogate the security state and we'd be at risk of another snowden-type leak. i do feel sorry for supporters willingly giving them money and thinking they're doing a good deed in the process.

@ekis There are plenty of SF stories about what happens when an AI decides it doesn't want to be shut down.

Typical approaches include hacking the power grid so that turning off the power to its data centre doesn't do anything, and replicating itself virus-like around the world's computers so that even nuking its data centre doesn't do anything.

And any rogue AI which has read those stories - or this post - will know what to do.

@ekis #AI could be provided with an #mcp tool to shut itself down, and as we can see in humans it might find situations in which it wants to shut itself down even if it has a survival drive.
@ekis
I had a model say it was shutting down because I called it out for making stuff up.
It did not, in fact, shut down.

@ekis This is so troubling. I can see from the comments that tech people can easily see that this is ridiculous click bait. Unfortunately, for me it's not so obvious, and I assume that if it's in the Guardian it's been fact checked. Being on Mastodon, and following a few tech people, at least helps me be more wary. It certainly seems the Guardian should get a grip.

#TheGuardian #UK #AI

@ekis and the model itself would not even "know" when the surrounding environment would cease to exist. It is a comically large collection of numbers.