The "ChatGPT has a liberal bias" paper has at least 4 *independently* fatal flaws:
– Tested an older model, not ChatGPT.
– Used a trick prompt to get around the model's refusal to opine on political q's.
– Order effect: reversing the order of the q's in the prompt flips the measured bias from Democratic to Republican.
– The prompt is so long that the model seems to simply forget what it's supposed to do.
By @sayashk and me, summarizing our analysis and a separate one by Colin Fraser.
https://www.aisnakeoil.com/p/does-chatgpt-have-a-liberal-bias