I see we are entering the ‘humans make mistakes too’ era of LLM-AI apologetics

And I really wish I had written the review of Alan Blackwell’s ‘Moral Codes’ I had wanted to write when it was still timely, because yes, the point is that our expectations of machine intelligence are completely different from those of human intelligence

Not enough is made of the fact that ‘ChatGPT passes the Turing test’ isn’t news because ELIZA already passed it, and *really* not enough is made of the fact that it should be bloody obvious that human intelligence is flawed in ways that we clearly do not want to recreate in a machine to the extent of being indistinguishable from a human
@dpk 'Machine built with knowledge of how to pass a test as part of its dataset knows how to pass a test.' Shocker! The journalists covering things like this and Anthropic's latest self-serving, ethics-themed drivel are reminiscent of the days when the media used to cover chess computers, designed for playing chess, being really good at playing chess. It's more impressive if you don't think too hard about it and just accept the PR stunt the way the friendly press have.
@Rycochet @dpk chess computers actually were worth covering though - "teaching" a computer to play well enough to beat the best human players was a massive accomplishment that took decades, and the folks who made it happen deserve a lot of credit even if it ultimately was just a PR project

@ratsnakegames @Rycochet @dpk @kkarhan

Even more impressive is that when the computer beat Kasparov, it was a purpose-built supercomputer, and it still lost a few games.

Now you can run Stockfish on your laptop and no human can beat it.