By now you know that #ChatGPT will make up nonsense, presented with confidence.
Useful framework by AI and Data Policy lawyer:
The NYT makes up nonsense too btw.
@viktoriaPammerSchindler love that analogy!
It helps to understand that #chatgpt works by sampling from a distribution of words most likely to appear next in a given context.
That means it’s great for ideation, less so for factual precision.
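That sampling step can be illustrated with a minimal sketch (a toy softmax over made-up scores, not the actual model — the token scores and the `temperature` knob here are illustrative assumptions):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token from a toy next-token distribution.

    `logits` maps candidate tokens to unnormalised scores; a softmax
    turns them into probabilities, and one token is drawn at random.
    Lower temperature makes the highest-scoring token dominate.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical scores for the context "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "pizza": 0.5}
print(sample_next_token(logits))
```

Note that even here "pizza" has nonzero probability — plausibility, not truth, is what the distribution encodes.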
ChatGPT Forced To Take Bar Exam Even Though Dream Was To Be AI Art Bot [SATIRE] https://www.theonion.com/chatgpt-forced-to-take-bar-exam-even-though-dream-was-t-1850036337
@jvt
I asked it which song some Beatles lyrics came from, as a random test. It kept insisting that lyrics from 'I Want to Tell You' (I think it was) were in the song 'Help!'. I tried to correct it and it apologised, admitted I was right, and then clarified that 'I Want to Tell You' was on the *album* Help!
For the record, the album was Revolver.
@jvt In my line of work, I get asked if we can use ChatGPT to generate clinical reports for cancer patients. My answer comes in 3 parts:
1. No.
2. Absolutely not.
3. Trusting life-or-death decisions about cancer treatment to Chat-GPT is not merely irresponsible, it's something you'd make up as an illustration of the worst possible use case.
@chloeraccoon @jvt @VulpineAmethyst They can try if they want to. But thankfully there are laws and regulations governing medical reports.
Medical regulators would have a strongly negative response to Chat-GPT in a clinical setting.
In less formal terms, anyone seriously asking to do it would be yeeted into the sun. 😀
@chloeraccoon @jvt @VulpineAmethyst I dislike the current UK government so much I moved to another continent to get away from it.
But even they are unlikely to burn clinical regulation to the ground and allow diagnosis by magic 8-ball, which is what Chat-GPT amounts to. (That said, no matter how low you set your expectations, this lot have a way of tunnelling underneath them...)
@iain_bancarz @jvt reminiscent of MYCIN, the first medical expert system. Developed to prescribe treatments for infections, it had no knowledge of its own limitations. If you told it the patient's symptoms are "the patient is dead" it would still respond with a prescription.
@jvt @toolbear Yup. I had a go with ChatGPT, asking for 1,000 words on a topic I know well. Result: authoritatively toned, garbled nonsense. Each time I objected, it apologised and issued a fresh batch of the same, often flatly contradicting the previous batch; at the third or fourth iteration it said, in effect, that it's unbiased and unemotional, so the problem was on my end.
Yet L. Ron Musk tells fairytales about the likes of full (fool) self-driving, and his minions eat it up and amplify it. Arrrrgh…
@Isocat @toolbear it’s not a database of facts, it doesn’t have any knowledge whatsoever nor does it even understand its own output.
It’s an algorithm that selects the next best word in context, based on learned patterns in training data.
That makes it an excellent choice for ideation and a bad choice for facts.
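The point about learned patterns rather than knowledge can be shown with a toy bigram model — a deliberately crude stand-in for a language model, with a made-up three-sentence "corpus":

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus"; the model only learns which word tends to
# follow which, not whether any statement is true.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigram frequencies: patterns[w] -> Counter of following words.
patterns = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    patterns[prev][nxt] += 1

def next_word(context, rng=random):
    """Pick the next word in proportion to how often it followed
    `context` in training — pattern matching, not fact lookup."""
    counts = patterns[context]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # "cat", "dog", "mat", or "rug"
```

There is no fact store anywhere in this: the model cannot distinguish a true continuation from a merely frequent one, which is the same reason a far larger model produces confident nonsense.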
@jvt Further question to add: Is it okay* if the generated text plagiarizes existing work, and can you even check that it doesn't? (You probably can't, reliably.)
* Erm, when would that be the case? Only when I generate something just for myself to look at?
@jvt "Made up nonsense, presented with confidence"
Sounds like me writing a cover letter trying to slyly include all the keywords *their* AI is looking for.
@paulmwatson gotcha. For me the "harm" is that (most?) people see it as a credible source. It isn't. OpenAI itself recommends double-checking its accuracy.
Of course that doesn't mean it's generally inaccurate. But do users assume they're getting the truth when the response sounds so confident? Probably — why not?
In that sense I'd argue its output is, right now, at best a useful starting point rather than "truth".
I would not call that Artificial Intelligence… Sigh…