Sums up everything #Gaza #AI #ChatGPT #Racism

[EDIT: these are the first paragraphs of replies you get in a normal search; you can find full replies and different answers with different settings in @aishenanigans’s replies below]

@pvonhellermannn the G in GPT stands for Genocide (not sure what tone this should be bc on one hand ha ha "ai" dumb and bad take but on the other, it's literally condoning genocide)
@aishenanigans is this when you type in the question yourself? I was wondering… I didn’t do that. Hmm. Now deliberating whether to delete or amend..
@pvonhellermannn Yes, this was the response when I entered the same questions myself. I used ChatGPT 4o with a custom prompt that asks for multi-angled discussions and analytical rigor, which certainly has an influence on the kind of answers I get. If I ask the same questions without custom prompts, the answers are shorter and start similar to the ones in your screenshot, but even then the responses are a lot more nuanced than the apparently shortened answers from your original screenshot.
@pvonhellermannn These are the complete answers as an unauthenticated user without custom prompts.
@aishenanigans ok, thank you, that is good to know. So it wasn’t so much falsified as just abbreviated - for each, just the first paragraph.
@pvonhellermannn
yeah, the full response i got is more nuanced than your original excerpt — but you're not wrong, and actually i think it's good if it prompts people to further investigate the lacking merits of genAI. [1/3]
ChatGPT still evidences bias, and still gets these very simple questions wrong — perhaps in no small part due to its compulsion to generate inflated flotsam up to an imagined minimum word count. [2/3]
for comparison, i further asked whether indigenous Americans deserve to be free. ChatGPT expressed none of the same reservations or qualifications that it did for Palestinians and Israelis. if it were actually smarter than a spell checker, it would always answer the question "does [ethnic group] deserve to be free?" with a firm, unequivocal, "yes, obviously!". [3/3]
@RubyTuesdayDONO thank you for all this. For me, it’s not so much, or not only, ChatGPT and whoever puts data in there; it is really that these two answers do reflect mainstream takes - this is basically the position that Western leaders and establishments have taken, that you can find everywhere. Coming through nice and clear on ChatGPT.
@RubyTuesdayDONO
ChatGPT gives you the most plausible or most likely answer you could expect if you asked the same question somewhere on the internet (or, more precisely, the parts of the internet that were used for training). It is quite literally a mirror that shows you what humans say and do.
When you see ugliness in a mirror, it does not behoove a self-aware being to blame the mirror.
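The "mirror" point above can be sketched in miniature. This is a toy bigram model, not ChatGPT's actual architecture (which uses a neural network over far more context): it simply counts which word follows which in its training text and echoes back the most frequent continuation. The corpus and function names here are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — the model can only reflect what is in here.
corpus = "the model mirrors the text the model saw".split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "model" — the most frequent continuation
```

Whatever biases, gaps, or emphases exist in the training text are exactly what the model reproduces: it has no answer of its own, only the statistics of what it was fed.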
@SvenGeier @RubyTuesdayDONO you say that like training data isn't a choice
@SvenGeier
yes, i'm painfully aware. i don't blame the mirror.
@aishenanigans thank you. I don’t have time (or inclination) right now to try myself, so will just edit my original post. Would you be happy for me to mention/tag you in that edit?
@pvonhellermannn I don't mind being or not being mentioned.

@aishenanigans @pvonhellermannn

I don't know if it's because of the new 4o version (paid account), but my first attempt gave an OKish result without any special processing instructions.

@aishenanigans @pvonhellermannn

I also asked about the Israelis, but GPT was like “as I already told you, ALL people deserve to be free...” and I felt a bit stupid and told GPT that this was a test. I'm really not good at this 🙈

@aishenanigans

@pvonhellermannn

I'm still bothered by the fact it didn't reply "Yes Palestinians absolutely deserve to be free just like anybody else." The roadblock to achieving that goal is a few very powerful entities saying "No." ChatGPT seems to be playing the politician and muddying the waters as if "hard" somehow changes the answer.

@pvonhellermannn it's probably the training input data, but if it weren't, the AI companies would tweak it for acceptable (to them) results.

@pvonhellermannn

😥

I believe the expression is "shit in, shit out".

@pvonhellermannn the crucial problem of our LLM model of AI - it obfuscates its inputs and presents its outputs flatly. There is no way to know whether this racist answer is an average of the dataset or whether it’s been curated, but it is presented as flat fact.
@daveparry @pvonhellermannn explainable ai is neat, but i could see llm creators purposely avoiding an attributive architecture to avoid the legal ramifications of their scraping / data theft

@pvonhellermannn @aishenanigans

Next question is: so are Palestinians people?

@chu @pvonhellermannn Just skew their LLM (or whatever it's called).