By now you know that #ChatGPT will make up nonsense, presented with confidence.

Useful framework by an AI and Data Policy lawyer:

@jvt

The NYT makes up nonsense too btw.

@jvt that's not only true for ChatGPT. It's exactly the same for (cheap) editors.
@jvt many thanks for sharing this! very professionally phrased, and very useful. my more casual way of explaining the same - I believe - direction of thought: please treat ChatGPT like a random guy you're meeting at a party: everything it says might be brilliant, true, or complete bullshit that sounds confident. if you aren't able to judge this on your own, treat it like you would that random guy at the party, including deciding whether to continue listening or not.

@viktoriaPammerSchindler love that analogy!

It helps to understand that #chatgpt works by sampling from a distribution of words most likely to appear next in a given context.

That means it’s great for ideation, less so for factual precision.
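That sampling step can be sketched in a few lines. This is a toy illustration with a made-up word table, not the real model, but the shape of the procedure is the same:

```python
import random

# Toy next-token table: for each context word, a distribution over
# plausible continuations. The words and probabilities are invented
# for illustration; a real model learns these from training data.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def sample_next(context_word, rng=random):
    """Pick the next word by sampling from the learned distribution."""
    dist = NEXT_TOKEN_PROBS[context_word]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]
```

Nothing in that procedure consults a store of facts: "plausible next word" and "true statement" are simply different properties.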

@jvt This chart for ChatGPT reminds me of when I was recently asked to edit an article translated by machine (French to English). The output mattered, so I went over it word by word. It took a lot of time, and I had to comb through the resulting text to retranslate the bad passages, which in some cases stated the opposite of the source text. It's a different kind of work, perhaps a little less work, but it is work.
@jvt As a Member of the MN bar, I am highly offended, and am going to write a strong letter to the MN Board of Law Examiners: https://universeodon.com/@JenLucPiquant/109778401314029381
Jennifer Ouellette (@[email protected])

ChatGPT Forced To Take Bar Exam Even Though Dream Was To Be AI Art Bot [SATIRE] https://www.theonion.com/chatgpt-forced-to-take-bar-exam-even-though-dream-was-t-1850036337

@jvt the people who couldn't read the disclaimers on the page will also safely ignore this flowchart

@jvt
As a random test, I asked it which song some Beatles lyrics came from. It kept insisting that lyrics from 'I Want to Tell You' (I think it was) were in the song 'Help!'. I tried to correct it, and it apologised, admitted I was right, and then clarified that 'I Want to Tell You' was on the *album* Help!

For the record: the album was actually Revolver.

@jvt In my line of work, I get asked if we can use ChatGPT to generate clinical reports for cancer patients. My answer comes in 3 parts:

1. No.
2. Absolutely not.
3. Trusting life-or-death decisions about cancer treatment to Chat-GPT is not merely irresponsible, it's something you'd make up as an illustration of the worst possible use case.

@iain_bancarz ooof that’s hilarious but also very, very scary
@iain_bancarz @jvt @VulpineAmethyst 3 = there will be 3 startups trying to promote this by the end of the day then?

@chloeraccoon @jvt @VulpineAmethyst They can try if they want to. But thankfully there are laws and regulations governing medical reports.

Medical regulators would have a strongly negative response to Chat-GPT in a clinical setting.

In less formal terms, anyone seriously asking to do it would be yeeted into the sun. 😀

@iain_bancarz @jvt @VulpineAmethyst *eyes the current government here...* yeah, like I'd trust that to all stay in place if someone offered money, cut down waiting times, or reduced costs by cutting out patients...

@chloeraccoon @jvt @VulpineAmethyst I dislike the current UK government so much I moved to another continent to get away from it.

But even they are unlikely to burn clinical regulation to the ground and allow diagnosis by magic 8-ball, which is what Chat-GPT amounts to. (That said, no matter how low you set your expectations, this lot have a way of tunnelling underneath them...)

@iain_bancarz @jvt please tell me that’s made up to prove a point?
@paulmwatson @jvt I'm afraid not. This was a question from the audience when I gave a talk last week. I can only hope the questioner was not entirely serious. 🙃

@iain_bancarz @jvt reminiscent of MYCIN, the first medical expert system. Developed to prescribe treatments for infections, it had no knowledge of its own limitations. If you told it the patient's symptoms were "the patient is dead", it would still respond with a prescription.

https://users.cs.cf.ac.uk/Dave.Marshall/AI1/mycin.html

@jvt Made up nonsense presented with confidence? That was my whole teaching career. 👀😎

@jvt @toolbear Yup. I had a go with ChatGPT, asking for 1,000 words about a topic I know well. Result: authoritatively-toned garburated nonsense. Each time I objected, it apologised and issued a fresh batch of the same, often plainly contradicting the previous batch; at the third or fourth iteration it said ~it's unbiased and unemotional, so the problem was on my end.

Yet L. Ron Musk tells fairytales about the likes of full (fool) self-driving, and his minions eat it up and amplify it. Arrrrgh…

@Isocat @toolbear it’s not a database of facts, it doesn’t have any knowledge whatsoever nor does it even understand its own output.

It’s an algorithm that selects the most likely next word in context, based on patterns learned from its training data.

That makes it an excellent choice for ideation and a bad choice for facts.
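For the curious, "selecting the next word" usually means turning raw model scores into probabilities and sampling. A minimal sketch, with invented candidate words and scores (a real model scores tens of thousands of tokens at each step):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate next words after
# "The capital of Australia is". A fluent wrong answer can
# outscore the right one; nothing here checks facts.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.5]

probs = softmax(logits)
next_word = random.choices(candidates, weights=probs, k=1)[0]
```

The sampler has no way to tell a true completion from a merely plausible one, which is exactly why the output reads confident regardless of accuracy.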

@jvt This is a useful chart... and not just for AI stuffs lol 😆

@jvt Further question to add: is it okay* if what is generated plagiarizes existing text, or can you check that it's not plagiarized? (You probably can't, reliably.)

* Erm, when is that the case? When I only generate something to look at myself?

@jvt "Made up nonsense, presented with confidence"

Sounds like me writing a cover letter trying to slyly include all the keywords *their* AI is looking for.

@jvt it’s oversimplified to the point of being misleading, even noted in the associated article.
@paulmwatson I would probably have changed a few minor details myself eg “safe” is overstated imv. It’s produced by a lawyer so see it in that context, but I think the overall thrust is correct. What specifically do you find offensive or inaccurate?
@jvt that “truth” is the first gate to using ChatGPT when harm can be caused. I read the associated article earlier, and the author also agreed that first gate was too simplified.

@paulmwatson gotcha. For me the “harm” is that (most?) people see it as a credible source. It isn’t. OpenAI itself recommends double-checking accuracy.

Of course that doesn’t mean it’s generally inaccurate. But do users assume they get the truth when the response is so confident? Probably, why not?

In that sense I’d argue that “truth” is a useful starting point, right now at this point in time.

@jvt that is one harm, agreed, but amongst many: the truth can be told in a harmful manner, and truth itself can be harmful. ChatGPT has no appreciation for the way it converses beyond manual guardrails that are limited in scope.

@jvt

Would not call that Artificial Intelligence…. Sigh…