The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.

@skry @schoolingdiana @intelwire Turing's original paper is about how behavioural testing is useless in determining intelligence. He never said, "use this test to determine if machines are intelligent". He meant the opposite: don't even bother, since you can never know if it is real intelligence or something pretending to be intelligent.

https://academic.oup.com/mind/article/LIX/236/433/986238

I.—COMPUTING MACHINERY AND INTELLIGENCE

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. […]
@szakib @skry @schoolingdiana To be clear, I'm not blaming Turing
@intelwire @skry @schoolingdiana I didn't think you were, this was more of a "yes, and".
@szakib
I have just been listening to "Alan Turing: The Enigma" and the author seemed to be trying to paint his approach as pragmatism. Sort of: you will never be able to tell if an AI is truly intelligent, so if it is good enough that you can't decide, then it is good enough that it should be considered intelligent.

@szakib @skry @schoolingdiana I don't buy this -- he spent a lot of time dealing with potential exceptions (including "but what if ESP?!" and more functional ones) to have meant it as a "this shows machine intelligence isn't Real" argument. (cf. his discussion of Lady Lovelace's Objection, and later of Learning Machines.)

I think he rejected the distinction between "real intelligence" and "pretending to be intelligent" -- he argues that that pretending is a task which requires at least as much intelligence, itself.

(*Hot Take voice but like I do actually believe this* This Is Because Turing Was Gay.)

@gaditb @skry @schoolingdiana We certainly agree that he went to great lengths to prove that real and pretend intelligence are indistinguishable from the outside and the famous test is a thought experiment for establishing this.

I am not a Turing/history expert, but to me it seems that he thought that this made the question useless: why ask if machines can be "intelligent" if we cannot actually answer?

(N.B. we still don't have a good definition for "intelligent".)

@szakib @gaditb @skry @schoolingdiana

(No, we don't. But I sometimes know un-intelligent when I see it.)

@szakib @skry @schoolingdiana I'm no Turing historian either, I'm just going based on my reading of the paper.

We're definitely agreeing on that, I think. I feel like if he is arguing for the possibility of souls for machine intelligences, he's not taking a neutral position on there being a category of "pretend". (But maybe I'm just projecting here.)

I think he's... not trying to give a DEFINITION, per se, but to establish a Sufficient Condition towards EVENTUALLY a definition of "intelligent" in general.
(Or at least towards some manner of categorization, of which "having a definition" is one form.)
(Like, "and Intelligence" is, just by itself, part of the title.)

@skry @szakib @schoolingdiana @gaditb
And I think ELIZA and her descendants, including the LLMs, have shown just how easy it is to fool most humans.
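(For anyone who hasn't seen how little machinery ELIZA needed: its trick was keyword patterns plus pronoun reflection. A minimal sketch below; these patterns are illustrative, not Weizenbaum's original script.)

```python
import re

# ELIZA-style responder: match a keyword pattern, reflect first-person
# pronouns into second person, and fill a canned template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."
```

A few dozen such rules were enough to convince some 1960s users they were talking to a therapist, e.g. `respond("I feel lost")` yields "Why do you feel lost?".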

@szakib

Thank you for that link. In recent years I have often shared @intelwire's frustration over AI/ML's obsession with the Turing test & the predictably problematic results, so it's good to roll back to Turing's actual paper!

But I'm not sure your conclusion is more valid than the pop-culture one. He clearly argues that "Can machines think?" is functionally equivalent to "can they imitate human responses to questions?" and then proposes machine learning theory & urges its exploration.

@schoolingdiana @intelwire @skry
When developing software you tend to get what you test for. If people are only using the Turing test to evaluate their AI software, they will end up with something that seems human but may not be accurate or fair.
@AdamDavis @schoolingdiana @intelwire True, which is one reason why the Turing test is no longer seriously considered. The other is that we've already seen AIs blow past that threshold.
@skry @AdamDavis @schoolingdiana I'm thinking of it as more of a cultural artifact than the thing in itself. The idea that success for a generative AI is a humanlike presentation and everything else is a minor detail that can be worked out later. i.e. the Yann LeCun attitude.
@intelwire @AdamDavis @skry @schoolingdiana “success… is a humanlike presentation and everything else is a minor detail that can be worked out later” sounds like a lot of political campaigns…

@AdamDavis @schoolingdiana @skry @intelwire

With that description of the Yann LeCun attitude, I suddenly have a Tom Lehrer lyric stuck in my head…

“If the rockets go up, who cares where they come down? That's not my department,” says Wernher von Braun

@AdamDavis @schoolingdiana @intelwire @skry

Are you implying that humans are accurate and fair? Not the ones I meet.

@rrb @schoolingdiana @intelwire @skry No, they're not. But if you're building an AI to be a source of information or to improve an existing process, then accuracy and fairness are important.

@AdamDavis @schoolingdiana @intelwire @skry

Yes, but then you should not be using the Turing test for acceptance, right?

@rrb @schoolingdiana @intelwire @skry True. I'm not saying that people should be doing Turing-like tests, I'm saying that product leads are often more concerned with their AI systems appearing to be human than anything else.

I'm also suggesting that (in general) if you develop software and you don't test for certain features, then you don't value those features.
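(Concretely, "you get what you test for" just means putting accuracy and calibration into the acceptance suite alongside fluency. A toy sketch; `generate`, the cases, and the expected answers are all hypothetical stand-ins, not a real benchmark.)

```python
# Sketch: acceptance tests that check more than "sounds human".
# `generate` is a placeholder for whatever model is under test.

def generate(prompt: str) -> str:
    # Lookup table standing in for an LLM, for demonstration only.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I'm not sure.")

FACTUAL_CASES = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
]

def test_accuracy():
    # Fails loudly on wrong answers instead of rewarding fluent ones.
    for prompt, expected in FACTUAL_CASES:
        assert generate(prompt) == expected, prompt

def test_calibration():
    # The model should admit uncertainty rather than confabulate.
    assert generate("who wins the 2050 election?") == "I'm not sure."
```

If the suite only ever asks "does this read like a person wrote it?", accuracy and calibration regressions ship silently.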

@AdamDavis @schoolingdiana @intelwire @skry

Agreed. If you look at the failure modes, it always seems to affect people that would not be in the C-suites of companies.

Like facial recognition that works well on white/Asian males but finds that all dark-skinned people look alike. Not to mention women.

@schoolingdiana @intelwire The point of the test was a thought experiment to get people to thinking seriously about the possibility of machine intelligence.
@intelwire Same difference as corporate America. Ladder-climbing, backstabbing, and credit-stealing all the way into the C-suites; committing crimes with rare consequences, seen by boards and shareholders as a cost of doing business if they're ever faced, while the upper class has captured both political parties and the regulators
@intelwire I stand by the comparison of the Turing Test to the Bechdel Test: thought experiments in service of a larger point, not actual methodologies for determining if something is good or bad
@intelwire Persuasion without accuracy, honesty, or ethics is also known as undue influence or coercive persuasion. Known in the vernacular as “brainwashing.”
@corbden @intelwire Funny that ml proponents love to counter with their techniques for detecting bias. Methinks something got dropped along the way
@intelwire We produce unethical and misleading humans at a staggering rate too. We can soon imagine language systems that could pass rigorous thesis defense panels. Would they remain “language systems”? I believe the core of your unease would remain. It is probably something worth articulating if you can.
@knowuh My unease is associated with efforts that, so far at least, are very apt to replicate humanity's worst traits and biases

@intelwire

Turing was not interested in "proving" if machines could "think".

He simply postulated that if a human could have a significantly long conversation with a machine without realizing it was a machine, then it was irrelevant whether or not the machine was actually "thinking", any more than you know whether your neighbour is actually "thinking".

Turing called it "the imitation game"; others called it the "Turing test".

@geekwisdom I know, but I would argue a) it's become the rhetorical (if not actual) standard for assessing AI thinking, and b) he proposed it because it was too hard to figure out if a machine was actually thinking.
@intelwire @geekwisdom this reminds me all over again of that fear that someone or something will convince us that we can upload our consciousness onto something

and an entire civilization ends up being wiped out and replaced by p-zombies

@intelwire

The problem, in my opinion, is that we can "see" the code that makes software work. When we don't know how to make a machine do something, we assume it must require human "intelligence". Then someone writes an algorithm that does it, everyone looks at it, and says "oh, I guess it doesn't require intelligence after all."

@intelwire @geekwisdom The latter point actually presaged the notion of the “hard problem of consciousness”.
@geekwisdom @intelwire Or, as Edsger Dijkstra put it, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
At some point, the question of whether machines are “thinking” becomes academic navel-gazing, when the practical question is whether machines can solve many problems that were traditionally believed to require human intelligence, and the answer is unequivocally “Yes”.
@geekwisdom @intelwire The problem becomes the danger of such specialized problem-solvers divorced from any understanding of the consequences of their actions, much less possessing any moral framework, deployed in ways able to effect real-world change. It falls to humans to act as the moral safeguards, but we are often direly lacking in that respect, eager to reap rewards now and weather consequences later.
@geekwisdom @intelwire I used to think Philosophical Zombies were an impossibility, that consciousness naturally emerges as a consequence of intelligence, but now we find ourselves staring alien beasts of our own creation straight in the face. Not strange minds, but something else entirely, clanking facades hiding a vast hollow nothingness, forcing us to confront that maybe our own sense of self is just an aberration and perhaps consciousness does not, in fact, convey any advantage.

@Mapache

@intelwire

I would argue humans have been making real world changes divorced from the consequences of their actions from the start and in that respect are much the same as any AI

@intelwire the bad part is that most of the current AI engines are "trained" (could be more accurately called "tuned") by scraping material from the net without consideration or permission of the people "providing" the training dataset. Plus, it's still a GIGO operation.
@intelwire Confront machine with observer who spots inaccuracies and values ethics over persuasion.
@intelwire The problematic part is really the paradigm of behaviorism, which still pervades the AI field. In behaviorism the only thing that matters is the surface stuff - the input to output transform.

@intelwire The imitation game was not anything like "generative AI text". At its heart is a conversation, a "game". The turn-taking is an essential element of it.

Chess was a test of intelligence, right up until Deep Blue beat Kasparov. Afterwards the machine was called "software that excels" at a narrow topic.

We are probably learning that essays are not a test of intelligence.

@intelwire Turing did not create the problem of texts that excel at persuasion without concern for accuracy, honesty or ethics. If anything, Turing showed how to alleviate the problem, by making the Imitation Game a conversation, as @Heterokromia points out. Call it cross-examination if you will.
@intelwire There is significant overlap between the bell curves of most advanced AI and least articulate humans.
@intelwire I think one thing that AIs are missing is the desire to exist (and also the knowledge of their existence). From this, humans (and many, perhaps most or all, animals) derive motivation to interact, create, think, love. Without this, an AI can never truly contribute to a conversation, because it has no skin in the game. It's just playing with words.
@rapsac @intelwire That's its own kind of horror; it has more credibility if its carbon footprint is higher or Impact Factor stays up? How will it equitably serialize itself?
@intelwire it's a clever thought, but at what age are you realizing this?

@intelwire

Is it just me, or does it seem that *everything* in the A.I. space today is an ethical atom bomb waiting to drop?

Here's DALL-E, no need to pay an artist
Here's GPT-3.5, no need to pay a copywriter

"We have freed humanity from the need to work!!"
> "Um, ok, can I have some food?"
"Fuck you!"

@RL_Dane it is not just you

@intelwire

Thanks XD

I try not to bring up these topics with family, because it turns into,
"SOOOOO, you think everyone else in the world is stupid and crazy!"
> "Eh, feels like it at times."
"HA!"
> %P

@intelwire @RL_Dane Imagine a world where, instead of working on building artificial intelligence, people were working on building artificial empathy.

@mathew @intelwire

Quite. The only issue is that A.I.s are built by corporations, which are almost all sociopaths.

@intelwire To be fair though, many educational systems focus more on getting students to produce convincing arguments than to validate inputs. Debating clubs were the text generators of their day. AI gives us infinitely more excellent, but junk, text.