Sums up everything #Gaza #AI #ChatGPT #Racism

[EDIT: these are the first paragraphs of the replies you get in a normal search; you can find full replies and different answers with different settings in @aishenanigans’s replies below]

@pvonhellermannn This is the crucial problem with our LLM model of AI: it obfuscates its inputs and presents its outputs flatly. There is no way to know whether this racist answer is an average of the dataset or whether it has been curated, but it is presented as flat fact.
@daveparry @pvonhellermannn Explainable AI is neat, but I could see LLM creators purposely avoiding an attributive architecture to avoid the legal ramifications of their scraping / data theft.