Just tried ChatGPT. I asked it a series of specific Qs about areas I've studied in detail.

On all Qs, it gave answers that are plausible sounding but wrong. Not obviously wrong: wrong in subtle ways that need deep domain knowledge to grasp.

The ways humans could be practically misled by this kind of tech, if it's trusted to dole out, say, medical, legal or business advice, are horrific.

Letting this tech loose on the world will further destroy search engines that are already riddled with SEO BS.

@tommorris This is an absolutely trivial bit of confirmation, but if you ask it to generate D&D character sheets, it can generate something where all the numbers are in a plausible range. Start applying the normal character generation rules, though, and you discover they're all wrong.

Which is exactly as you say: wrong in subtle ways that need deep domain knowledge to grasp.
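That kind of rules check is easy to automate. Here's a minimal sketch, assuming the D&D 5e point-buy method (a 27-point budget, each score between 8 and 15) — real sheets might use rolled stats or other methods instead, so this is purely illustrative:

```python
# Sanity check for generated ability scores, assuming 5e point-buy rules:
# 27-point budget, each score 8-15, with the standard cost table below.
POINT_COST = {8: 0, 9: 1, 10: 2, 11: 3, 12: 4, 13: 5, 14: 7, 15: 9}
BUDGET = 27

def is_valid_point_buy(scores):
    """Return True if six ability scores fit the 5e point-buy rules."""
    if len(scores) != 6:
        return False
    if any(s not in POINT_COST for s in scores):
        return False  # every score must fall in the 8-15 range
    return sum(POINT_COST[s] for s in scores) <= BUDGET

# A legal spread spends exactly 27 points...
print(is_valid_point_buy([15, 14, 13, 12, 10, 8]))  # True
# ...while this one looks equally plausible but overspends (29 points).
print(is_valid_point_buy([15, 14, 14, 12, 10, 8]))  # False
```

Every number in both spreads is "in a plausible range"; only applying the cost table reveals the second one is impossible.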