Finally, amid media's moral panic and anthropomorphism and sensationalism about tech, someone--namely Molly Roberts--gets ChatGPT et al right: It is just spitting us back at ourselves. We have met Bing and Bing is us.
https://www.washingtonpost.com/opinions/2023/02/17/bing-chat-identity-crisis-programming/
Bing Chat’s identity crisis reflects the mixed messages we’ve given it

Bing is Bing Bing Bing Bing Bing and Bing Bing Bing Bing Bing. Any questions?

The Washington Post
@jeffjarvis Can we start admitting that Sydney and its counterparts are not AI but machine-learning apps? That's why they suck so hard.
@Loucovey @jeffjarvis what actual "AI" isn't machine learning? What #AI used to mean in the 90s is what #AGI means today. And nobody is talking about it because it's 5-100 years out.
@travisfw @jeffjarvis 1/ AI requires three things: a comprehensive, curated, and objective data set; a machine-learning component; and a deep-learning component to provide context and nuance.
@travisfw @jeffjarvis 2/ Microsoft and Google have each put out a machine-learning app, driven by a comprehensive but largely uncurated data set and no deep-learning aspect. It is, for all intents and purposes, an automated conspiracy-theory fan with Tourette's. Using the term "AI" is strictly marketing BS.
@Loucovey @jeffjarvis That sounds about right. Still, I'm sticking with: deep learning is ML and "AI" is always BS. I do think "AI" will gain meaning again when its usefulness in marketing inexorably declines, but who knows how long that will take.
@travisfw @jeffjarvis that’s my point. Almost everything labeled as AI is only ML attached to a database. The worst of these products are mailing-list companies who use a mediocre (at best) ML model with a truly awful data set and then sell the lists to lazy and/or incompetent marketers.
@Loucovey Say more about your assertion about no deep-learning aspect.
@jeffjarvis The current generative AI apps have a rudimentary deep-learning (DL) component to make their responses seem human-like, but a complete DL has the ability to recognize error. So far ChatGPT and Google's version have not demonstrated that ability. That is due to the lack of curation in the massive data set they use. These "AIs" are parroting information. It's been compared to "spitting us back at ourselves." A real AI would be able to recognize error.
@jeffjarvis /2 That is why successful AIs (they do exist) are highly focused. The data has been curated, vetted, and is constantly updated. They work for cybersecurity because there is a lot of good data available on malware and social-engineering examples. There is also an intentional bias toward protecting the system, so there are more false positives than negatives. ChatGPT lacks that ability because its data set is so flawed, negating the effect of its limited DL.