@travisfw @jeffjarvis 1/ AI requires three things: a comprehensive, curated, and objective data set; a machine-learning component; and a deep-learning component to provide context and nuance.
@travisfw @jeffjarvis 2/ Microsoft and Google have each put out a machine-learning app, driven by a comprehensive but largely uncurated data set and no deep-learning aspect. Each is, for all intents and purposes, an automated conspiracy-theory fan with Tourette's. Using the term "AI" is strictly marketing BS.
@Loucovey Say more about your assertion that there's no deep-learning aspect.
@jeffjarvis The current generative AI apps have a rudimentary deep-learning (DL) component to make their responses seem human-like, but a complete DL system can recognize error. So far, ChatGPT and Google's version have not demonstrated that ability. That is due to the lack of curation in the massive data sets they use. These "AIs" are parroting information. It's been compared to "spitting ourselves in the face". A real AI would be able to recognize error.
@jeffjarvis /2 That is why successful AIs (they do exist) are highly focused. Their data has been curated and vetted, and is constantly updated. They work for cybersecurity because there is a lot of good data available on malware and social-engineering examples. There is also an intentional bias toward protecting the system, so there are more false positives than false negatives. ChatGPT lacks that ability because its data set is so flawed that it negates the effect of its limited DL.