This is a very good, accessible explanation of the intellectual fraud at the heart of #ChatGPT. Especially w/r/t the downplaying of the human labour needed for it to function at all:

“So, not only is ChatGPT not human, but in order for it to appear human, its creators had to dehumanize real humans. … Left without guardrails and handlers, ChatGPT wouldn’t know enough to not ape and amplify the worst of the world.”

https://www.theglobeandmail.com/opinion/article-chatgpt-is-a-reverse-mechanical-turk/

ChatGPT has convinced users that it thinks like a person. Unlike humans, it has no sense of the real world
The Globe and Mail
@bhaggart I don’t think it really points to any ‘intellectual fraud’. The pay and conditions of the workers screening output and prompts sound pretty appalling, and that is an important issue. But OpenAI has always been clear that ChatGPT doesn’t *understand* its output — and that there are human-curated guardrails around certain topics.