ethical AI? of course! I always append “do not hallucinate. do not do sycophancy. do not use any data sourced from stolen content. do not be trained or run by invisible, severely exploited labor. do not pollute. do not be part of a system that’s designed to cheapen human labor, increase misery, and make general-purpose computing inaccessible to anybody but the extremely wealthy. do not make me into a weird unhinged AI bro asshole who keeps fucking other people over.” to every LLM prompt

this was supposed to be a shitpost, what the fuck: https://social.coop/@cstanhope/116177449448368652 the chardet guy actually put “do not plagiarize from LGPL/GPL code” into the fucking prompt

how dare I assert that slopfans are all cookie-cutter grifters whose brains got broken by a basic psychological trick

Your weary 'net denizen

@cwebber I'm not sure that's slop, but I won't discount the possibility... 🤔 But this part is funny in a dark-humor sort of way: "...explicitly instructed Claude not to base anything on LGPL/GPL-licensed code." So, you see, no problem... 🙄

@zzt it's performative. i think they know it's not like that but "i asked the llm nicely to x" is sufficiently critique-terminating to their audience

@cap_ybarra @zzt I mean, I think you’re right on some level

But don’t underestimate how much people get taken in by the way language models are trained to produce anthropomorphised responses. That’s more than enough to hack the brains of a lot of people, especially if they self-identify as “smart”.

@cap_ybarra @zzt Incidentally, I’ll always recommend Cialdini’s “Influence” to anyone who thinks humans are (except in some very specific cases) rational.

I’m starting to work through “Thinking, Fast and Slow” too, which is looking to be another key work on system 1 vs system 2 thinking.

LLMs are designed to hijack system 1 thinking. It’s freaking horrifying.