I am not enamored of arguments saying that the solution to AI problems is "educating the users" to be more skeptical, to fact check, etc. Education is not going to solve these problems. We've been trying to educate people about account security, phishing, and other scams for decades, and about misinformation for years, and it's all only getting worse. Most people are nontechnical, have busy lives, and have neither the background nor the interest to learn how these systems work, any more than how a toaster works. It is the responsibility of these systems to be safe, not to push that responsibility onto the users.
@lauren Also, the "be more skeptical" message just pushes people who don't have the time or inclination to dig in toward a "can't trust anything or anybody, the truth is unknowable" position, which is inherently unhealthy too.
@lauren The solution to what people are calling AI problems right now is to either develop some actual AI or quit pretending that AI is involved. People only take these parody generators seriously because they're marketed as AI.

@lauren

The whole point of the Chat part of ChatGPT is that you are part of the creative loop: you enter a dialogue and work with the AI to refine your joint creation to the point at which you are no longer able to recognise it is bullshit.

@lauren could we stop calling it "AI", for a start? That label is misinformation.
@Colman Way too late for that. Waste of time and effort to even try. That ship has sailed.

@lauren

It's very similar to the lies con artists and scammers have used since time immemorial:

"I'm doing my victim a favour and they'll learn to be more vigilant" and "If I don't scam them, others will do it and far worse".

@lauren

"Socialize the risk..."