Just tried ChatGPT. I asked it a series of specific Qs about areas I've studied in detail.

On all Qs, it gave answers that are plausible sounding but wrong. Not obviously wrong: wrong in subtle ways that need deep domain knowledge to grasp.

The ways humans will be practically misled by this kind of tech if it's trusted with, say, doling out medical, legal or business advice are horrific.

Letting this tech loose on the world will further destroy search engines that are already riddled with SEO BS.

@tommorris This was my experience when I tried out a liberal arts question. ChatGPT completely mischaracterized Plato's views on rhetoric, but with convincing sentence structure.

For giggles, I tried it out on a specific finance question I'm working on, and it said it can't do that analysis, no matter what aspect I asked about. It's probably best if it stays that way.

@myemuisemo @tommorris

> For giggles, I tried it out on a specific finance question I'm working on, and it said it can't do that analysis

I heard that it can be manipulated into answering such questions anyway, either by starting a new thread or by asking it to roleplay as an expert in that field.