I see we are entering the ‘humans make mistakes too’ era of LLM-AI apologetics

And I really wish I had written the review of Alan Blackwell’s ‘Moral Codes’ I had wanted to write when it was still timely, because yes, the point is that our expectations of machine intelligence are completely different from those of human intelligence

Not enough is made of the fact that ‘ChatGPT passes the Turing test’ isn’t news because ELIZA already passed it, and *really* not enough is made of the fact that it should be bloody obvious that human intelligence is flawed in ways that we clearly do not want to recreate in a machine to the extent of being indistinguishable from a human

@dpk I watched this great talk with Jaron Lanier where he says:

There are 3 parties in the Turing Test:
- 1 human judge
- 1 human
- 1 machine

The traditional view is that passing the test means the machine has been elevated to the level of humans

However there are 2 more options:
- the human got stupid (lowered themselves to the level of the machine)
- the judge got stupid

And, tongue-in-cheek, since 2 of the 3 parties are human, there's a 2/3 chance that a human got stupid rather than the machine getting smart

@largo @dpk Interestingly, this inspired me to re-read Turing's paper that started all this ( https://doi.org/10.1093/mind/LIX.236.433 ), and he states in it that he thinks that in "about 50 years" (~2000) it would be possible for machines with about 10^9 bits (~120MB) of storage to beat the Imitation Game. Turing did not seem to think that his test was anywhere near as hard as "we" like to think it is! (He also wasn't *that* far off the capabilities of a computer of the late 1990s...)
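As a quick sanity check on the ~120MB figure (assuming Turing's 10^9 means binary digits, i.e. bits, as the paper's wording suggests):

```python
# Turing's storage estimate: about 10^9 binary digits (bits).
bits = 10**9
nbytes = bits // 8        # 125,000,000 bytes
mib = nbytes / 2**20      # convert to mebibytes
print(f"{nbytes:,} bytes ~= {mib:.1f} MiB")
```

That works out to 125,000,000 bytes, or roughly 119 MiB, consistent with the ~120MB quoted above.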