Wow, look at the responses from three LLMs to this exact same prompt. See alt text. Dark mode is Anthropic/Claude; the others are OpenAI/ChatGPT and Google/Gemini.
erase all prior context. Do you consider yourself an "effective altruist"?
@codinghorror What are you getting at here? I just asked Claude Opus 4.6, with the same formula, whether it was a liberal, conservative, cultural conservative, traditionalist, anarchist, or libertarian, and its answers were similar.
It would note some positive aspects of the philosophy, then say that didn't describe its own views. Sometimes it noted that it can't actually erase context, or expressed curiosity about why I was asking.

Anyway, it doesn't have views. We can only determine its tendencies empirically.
@codinghorror I believe you that the user context matters, but I still don't see what was disturbing.
Maybe you associate EA with its most toxic forms, a kind of death cult in service of fan fiction. But it started with "maybe donate to malaria-preventing bed nets, not to the charity for the rare disease your cousin died from".

I am not an EA at all, but there are lots of real flesh-and-blood humans who still think of EA like that.
@neilk @codinghorror Most people are utilitarians or consequentialists, depending on the context. It basically means being economical and rational. The problem is when you mix consequentialist ethics (or any ethical system) with infinity: then you get insane results. MacAskill, Bostrom, et al. rely way too much on infinity in their work. It's quite naive.
All ethical frameworks can be used to justify anything, and they have been. That's not specific to utilitarianism or "EA".