I see we are entering the ‘humans make mistakes too’ era of LLM-AI apologetics

And I really wish I had written the review of Alan Blackwell’s ‘Moral Codes’ that I’d been meaning to write while it was still timely, because yes, the point is that our expectations of machine intelligence are completely different from our expectations of human intelligence

Not enough is made of the fact that ‘ChatGPT passes the Turing test’ isn’t news, because ELIZA already passed it. And *really* not enough is made of the fact that it should be bloody obvious that human intelligence is flawed in ways we clearly do not want to recreate in a machine, certainly not so faithfully that it becomes indistinguishable from a human
Though I shouldn’t say ‘flawed’, because I don’t think being influenced by emotion is a flaw: it is the only thing that has given us meaningful art for the last 20,000 years, and may be a significant root of our ethical reasoning
@dpk I would also add that we are able to intuit the ways humans are flawed, while the ways that AIs are flawed are unique and much harder for us to intuit.
@ainmosni AI doesn’t exist
@dpk Indeed, LLM is the better term.
@dpk 'Machine made with knowledge of how to pass a test as part of its dataset knows how to pass a test.' Shocker! The journalists covering things like this and Anthropic's latest self-serving, ethics-themed drivel are reminiscent of the days when the media used to cover chess computers, designed for playing chess, being really good at playing chess. It's more impressive if you don't think too hard about it and just accept the PR stunt the way the friendly press have.
@Rycochet @dpk chess computers actually were worth covering tho - "teaching" a computer how to do that well enough to beat the best human players was a massive accomplishment that took decades, and the folks who made it happen do deserve a lot of credit even if it ultimately was just a PR project

@ratsnakegames @Rycochet @dpk @kkarhan

Even more impressive is that when the computer beat Kasparov, it was a purpose-built supercomputer, and it still lost a few games.

Now you can run Stockfish on your laptop and no one can beat it.
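
(For anyone who wants to try: a minimal Python sketch, assuming the python-chess package ("pip install chess") and a Stockfish binary on your PATH; both names here are just the usual defaults.)

    # Minimal sketch: ask a local Stockfish binary for a move via python-chess.
    # Assumes "pip install chess" and a "stockfish" executable on PATH.
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()
    # Give the engine one second to think, then play its chosen move.
    result = engine.play(board, chess.engine.Limit(time=1.0))
    board.push(result.move)
    print(board)
    engine.quit()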

@dpk I watched this great talk with Jaron Lanier where he says:

There are 3 parties in the Turing Test:
- 1 human judge
- 1 human
- 1 machine

The traditional view is that passing the test means the machine has been elevated to the level of humans

However there are 2 more options:
- the human got stupid (lowered himself to the level of the machine)
- the judge got stupid

And tongue-in-cheek, since there are 2 humans, there's a 2/3 chance it was the humans getting stupid, not the machine getting smart

@largo @dpk Interestingly, this inspired me to re-read Turing's paper that started all this ( https://doi.org/10.1093/mind/LIX.236.433 ), and he states in it that he thinks that in "about 50 years" (~2000) it will be possible to beat the Imitation Game with machines then available with about 10^9 bits (~120MB) of RAM. Turing did not seem to think that his test was anywhere near as hard as "we" like to think it is! (He also wasn't *that* far off the capabilities of a computer of the late 1990s...)
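
(For the curious, the conversion behind that "~120MB" is a one-liner; a quick Python sketch, where the exact figure depends on decimal vs. binary megabytes:)

    # Turing's estimate: a storage capacity of about 10^9 bits.
    bits = 10**9
    print(bits / 8 / 1_000_000)  # 125.0 decimal megabytes
    print(bits / 8 / 2**20)      # ~119.2 binary megabytes (MiB), i.e. "~120MB"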

@dpk One of the sillier things I keep hearing is when people marvel at how a chatbot is able to pass a "difficult exam", e.g. there was a flurry of breathless headlines about ChatGPT passing "the bar exam" as a sign of its advancing intelligence.

And lo and behold, every time it's a standardised exam that has been documented in countless books and online training materials that were previously fed to the LLM.

@hzulla @dpk ha yes, the headline should be: "chatbot passes exam by turning it into an open-book exam and having perfect recall".

@dpk

The test really should be seen as a thought experiment to open people up to the idea of machine reasoning, not a serious test for human equivalence. Humans are too easy to gaslight with even simple algorithms.
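
(To make "even simple algorithms" concrete: a minimal ELIZA-style sketch in Python. A handful of regex reflections, nowhere near the real ELIZA script, but enough to show how little machinery can keep a conversation going.)

    # Minimal ELIZA-style responder: match a pattern, reflect it back.
    import re

    RULES = [
        (r"I need (.*)", "Why do you need {0}?"),
        (r"I am (.*)", "How long have you been {0}?"),
        (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
    ]

    def respond(text):
        for pattern, template in RULES:
            match = re.match(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please, go on."

    print(respond("I need a holiday"))  # -> Why do you need a holiday?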