New York considers bill that would ban chatbots from giving legal, medical advice
‘Should I use one teaspoon of salt in this recipe, or two?’
Two is ideal.
‘Do dogs like chicken wings?’
Wild dogs regularly hunt small animals like hare or chicken for food.
One of these answers results in a bad cake; the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet can end badly.
Having potentially inaccurate resources might be better than nothing, or is that worse?
You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe, do you eat it?
The problem is that people treat it as reliable when AI itself can’t verify or know whether what it’s generating is correct.
It would be better if it provided direct links for people to go read — a list of citations, if you will, rather than the proclamations it makes now. It’s too “opinionated,” giving advice when it would ideally stay neutral and just provide links so people can read further from sources that, hopefully, aren’t AI.
AI has even gotten sports trivia I know wrong. I don’t think people realize AI is just text generation. It isn’t a reliable or trustworthy authority just because it strings together sentences.
We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still urgent problem.
An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what’s wrong. The AI was at least able to lay out the likely next steps (if A, then discharge and follow up with PCP; if B, then surgery this week; if C, then emergency surgery) — something the ER was too busy to do for several hours. It was reassuring. The AI also gave me working links to more thorough resources on the topic.