https://www.aspendigital.org/report/ai-101/
#MachineLearning #AutomatedReasoning #Data
Some dreams are eerily realistic. Sometimes, you might not even realize that something you remember came from a dream for several hours or even days afterward. But that doesn't change the fact that it was a dream.
This is part of why I take issue with the idea of #AI hallucinations. Currently, EVERYTHING that #GenerativeAI produces is a hallucination, no matter how believable or accurate. The idea that only the obviously incorrect things are "hallucinations" is misleading.
I liked this thread on non-personifying ways to talk about Large Language Models like ChatGPT - which remix earlier writing based on probability & similarity, like a more elaborate version of a phone keyboard's "guess the next word".
The language we use can help people remember that these tools aren't actually "intelligent" and don't have any common sense!
https://mastodon.publicinterest.town/@b_cavello/110947429317025603
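To make the phone-keyboard analogy concrete, here's a minimal sketch of "guess the next word" using simple word-pair counts over a made-up corpus. (This is a toy illustration, not how LLMs actually work — they use neural networks over tokens rather than raw counts — but the basic loop of "pick a statistically likely next word" is the same shape.)

```python
# Toy "guess the next word" model, in the spirit of a phone keyboard.
# The corpus below is invented purely for illustration.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat" — it follows "the" most often here
print(guess_next("sat"))  # "on" — the only word ever seen after "sat"
```

The model has no idea what a cat is; it's just echoing patterns in the text it was fed — which is the whole point of the non-personifying framing above.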
Another way to #TalkBetterAboutAI is to try to avoid personifying language. This is TOUGH. Personifying stuff is so useful for explaining things, and describing things more accurately often takes a few extra words. But I think it's worth it. Talking about these tools as tools helps us recognize our own agency and impact as users, and it can also clarify the role of the people who created them.