By now you know that #ChatGPT will make up nonsense, presented with confidence.

A useful framework from an AI and Data Policy lawyer:

@jvt it’s oversimplified to the point of being misleading, even noted in the associated article.
@paulmwatson I would probably have changed a few minor details myself, e.g. "safe" is overstated in my view. It's produced by a lawyer, so see it in that context, but I think the overall thrust is correct. What specifically do you find objectionable or inaccurate?
@jvt that "truth" is the first gate to using ChatGPT when harm can be caused. I read the associated article earlier, and the author also agreed that the first gate was too simplified.

@paulmwatson Gotcha. For me the "harm" is that (most?) people see it as a credible source. It isn't. OpenAI itself recommends double-checking its accuracy.

Of course, that doesn't mean it's generally inaccurate. But do users assume they're getting the truth when the response is delivered so confidently? Probably; why not?

In that sense I'd argue that "truth" is a useful starting point, right now.

@jvt that is one harm, agreed, but one among many: even the truth can be told in a harmful manner. Truth itself can be harmful. ChatGPT has no appreciation for the way it converses beyond manual guardrails that are limited in scope.