The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire We produce unethical and misleading humans at a staggering rate too. We can imagine language systems that could soon pass rigorous thesis defense panels. Would they remain “language systems?” I believe the core of your unease would remain. It is probably worth articulating if you can.
@knowuh My unease is associated with efforts that, so far at least, are very apt to replicate humanity's worst traits and biases.