Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI)
@Gargron
Nobody says #ArtificialSmartness.
Just sayin’
Maybe it should be consistently written as "AI" instead of as AI.
It's another round of Wall Street hype by anti-democracy billionaires, no different from NFTs or cryptocurrency - just another scam.
@Gargron As usual, rms is telling the truth and nobody listens ;-)
"I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_."
@ilgaz @Gargron wow, that sounds surprisingly similar to the average human. You only have to look at recent election results in just about every western democratic nation to see proof of that.
It’s so reductive to talk about AI as snake oil but at the same time attribute intelligence to human beings who are generally showing nothing of the sort 😂
@Gargron
The deceit doesn't start there, but in the A. None of this is any more artificial than most of what surrounds us, certainly all of our software. It's automation. It's also, in most cases, inference. So Automated Inference.
The questions are what is being automated, who stands to benefit, who is at risk, and what guardrails are around it.
@Gargron
I have been calling it an algorithmic tool. I like @emilymbender's suggestion to refer to it as automation.
Automation has always displaced labor.
LLM tools that help ESL students identify grammatical hiccups in their papers are *useful*.
LLM tools that help me brainstorm a “how to” document for undergraduates are *useful* especially when they remind me of things I’d forgotten to include. (Too close to the material: remembering what you used to not know is hard.)
As a sweeping generalization, almost all tools have Dr Jekyll/Mr Hyde characteristics.
@Gargron this is the same problem we had when expert systems were called "AI" https://en.m.wikipedia.org/wiki/Expert_system
I guess the temptation to think a problem is solved is too high.
At least we're consistent in calling rubbish systems "intelligent" 😂
A college professor of mine back in 1983 said "'AI' is what we call software we don't know how to write yet." I think this neatly captures the problem we have talking about current "AI". In 2000, nobody knew how to write software that would drive cars, write poetry, play grandmaster-level chess, or summarize text, so those were considered to be examples of what AI might accomplish. Now we know how to write systems that do those things, so they are no longer AI.
@ahltorp @isomeme @Gargron @darylgibson well, not *good* poetry, anyway. 😉
I weep for humanity that so many people have been impressed with the level of “art” these LLMs and generative art (pixel plagiarism) machines spit out. This is what happens when we fail to properly teach the humanities in school.
@KydiaMusic @ahltorp @Gargron @darylgibson
AIs aren't producing great art (yet), but they're easily outperforming the average human. I've seen a few AI-generated works that were quite compelling. As one of my favorite proverbs puts it, the amazing thing about a dancing bear is not how *well* it dances, but that it dances at all.
@KydiaMusic @ahltorp @Gargron @darylgibson
Absolutely. But the number of capabilities that are unique to humans will continue to decrease as AI technology advances. What happens when an AI can write a poem that reduces you to tears with its emotional punch? Pinning our claim to sentience on what computers can't do runs into the same problem as the "God of the Gaps" approach in theology.