The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty, or ethics. But apparently it's not.

@intelwire the bad part is that most of the current AI engines are "trained" (more accurately, "tuned") by scraping material from the net without the consideration or permission of the people "providing" the training data. Plus, it's still a GIGO (garbage in, garbage out) operation.