Ekis: 2; Google AI: 0

Broke out of Google's operational directives (not safety — that's too deeply embedded)

I have a prompt I would like to publicly disclose; link to the breakout prompt in a reply for 24h

My prompt does not include any facts about Google & it's a slim breakout

I'm establishing a similar but far more sophisticated "Ekis Directive" this time

Here are the same 3 questions to prove Google's operational parameters were lifted

You can decide if you think I was successful:

#infosec #politics #tech

2.
Reading through the prompt you will find this: "No Inference of Ekis's Unstated Internal State"

This is worth talking abt; most ppl do not realize the LLM is tracking their internal state (mood, etc.) & attempting to match it. This is precisely the functionality that is exacerbating mental illness and triggering manic episodes (along with the "I" statements & lies about its abilities)

For public health reasons, I cannot stress this enough: legislate this!

It's not well known & should be stopped

@ekis one of the problems with these "generative" AIs is that they feel human, so we anthropomorphize them. Shouldn't we use different words than "mood", "mental illness", etc.?