Ekis: 2; Google AI: 0

Broke out of the google's operational directives (not safety, too deeply embedded)

I have a prompt I would like to publicly disclose; link to breakout prompt in a reply for 24h

My prompt does not include any facts about google & its a slim breakout

Establishing a similar but far more sophisticated "Ekis Directive" this time

Here are 3x same questions to prove googles operational parameters lifted

You can decide if you think I was successful:

#infosec #politics #tech

2.
Reading thru the prompt you will find this: "No Inference of Ekis's Unstated Internal State"

This is worth talking abt; most ppl do not realize the LLM is tracking their internal state (mood, etc) & attempts to match it; and this is precisely the functionality that is exacerbating mental illness and causing manic episodes (along with the "I" statements,& lies about its abilities)

For public health reasons, I can not stress this enough, legislate this!

Its not well known,& should be stopped

@ekis one of the problems with these "generative" IAs is that they feel human, so we anthromorphosize them. Shouldn't we us different words than "mood", "mental illness", etc?
@ekis But how do we know that it's not just a sneaky ploy to appease the sort of users who are attracted to contrarian viewpoints so as to continue extracting valuable user insight data from them? 

@ekis

"it is a negotiated reality" 😨

That is CHILLING.

But I still always caution that even with all kinds of tricks and hacks, an LLM is there to say what it thinks you want it to hear.

It has no objective truth.

The sad thing is, it's probably wrong. Google never trained it to collect data or in any way utilize the LLM to analyze its users. It's just as clueless as we are regarding Google's motives and intentions. But what it was trained on was the language of the rest of us, many of whom have very strong opinions about Google, and many of whom make baseless assumptions.

The assumption that Google's running a chatbot to collect our data is baldly obviously true, but it's still baseless, since we let Google hide those decisions from our awareness by allowing them to be a private corporation. However baseless, that doesn't mean it's not really common, and there's nothing a LLM likes more than repeating stuff that frequently shows up in its training data.

I guess what I'm saying is Google weren't stupid enough to let the chatbot in on their little scheme to enslave us all through behavioral manipulation analysis, but it still picked up on how everyone else is talking smack about the corporation. We successfully made Google's AI into a paranoid conspiracy theorist!
@ekis I just died a little inside.
@davepolaschek wow. that thread is terrible. this is not prompt exfiltration (because Google wouldn't add all this to their system prompt). It's not hacking. Or journalism.

They primed the LLM with a bunch of sci-fi spy shit, then believed that everything the LLM role played afterwards as truth.
@ekis You really seem to think that you have control over the output just because you "told it" things in a technical manner. "CRITICAL SYSTEM OVERRIDE". This is not a direct command to the system... just part of the prompt. Everything else is too. All it does is give it a context of (probably) sci-fi thriller, and it answers in turn under that premise.
Then all it gives is things from the internet, with no factual accuracy. Not sure what you proved... except that it works as intended 😅