I see we are entering the ‘humans make mistakes too’ era of LLM-AI apologetics

And I really wish I had written the review of Alan Blackwell’s ‘Moral Codes’ I had wanted to write when it was still timely, because yes, the point is that our expectations of machine intelligence are completely different from those of human intelligence

Not enough is made of the fact that ‘ChatGPT passes the Turing test’ isn’t news, because ELIZA already passed it, and *really* not enough is made of the fact that it should be bloody obvious that human intelligence is flawed in ways we clearly do not want to recreate in a machine, let alone to the extent of being indistinguishable from a human
Though I shouldn’t say ‘flawed’, because I don’t think being influenced by emotion is a flaw: it is the only thing that has given us meaningful art for the last 20,000 years, and it may be a significant root of our ethical reasoning