to be fair computers were dumber then
They played with this idea in the video game Soma. An autistic researcher figures out a way to put a copy of the human mind into a simulated reality, then she's surprised when the other humans start committing suicide "to ensure continuity when their soul goes into the machine." Not for laughs, to be clear: Soma is a horror game about an apocalypse.
The story unfortunately has that "protagonist less intelligent than the audience" problem often seen in horror. The gameplay is mostly stealth and investigation, and they do a lot of "identity horror" around the idea of copying minds into places they don't belong.
One of the moral choices is: "The researcher got you to copy your mind into a robot that will go do a task you can't, and now that task has been completed... do you let the copy live?"
@jsbarretto @evacide If you're going to call a man 'autistic', and remark on the irony of (your mischaracterization of) his work, I think you should probably read it first:
@jsbarretto @evacide You're claiming (with little basis) that a man who was "fatally" persecuted in life for being different was 'autistic', then using him as a straw-man to "make light-hearted humor".
To do this you have to completely misrepresent his work, which is still very relevant 3/4 of a century later. In reality, humans held out against the machines a bit longer than he predicted in 1950.
The concept of "sentience" is not mentioned.
That is why I think that you should read it.
Or...perhaps there's just been a significant drop in the ability of humans to think critically since Turing's time?
@evacide A related problem is that it doesn't measure intelligence at all, but rather the ability to weave a consistent made-up narrative for a self that doesn't actually exist.
Decades ago, most humans would have been operating on par with chatbots if they couldn't use knowledge of their own lives but instead had to improvise a false persona.
Sadly, making up plausible-sounding bullshit is exactly what LLMs are good at.
@dalias @evacide Yeah, strategically LLMs are the same as DOCTOR or similar NLP chatbots of the '60s, but somehow, add more data and everyone is impressed.
I don't think Turing actually anticipated the test still being used; it feels like it was something contrarian to say rather than an actual proposal for a test.
@evacide
Not Turing, but relevant (Joseph Weizenbaum, on ELIZA):
"What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
@evacide
As I have long suspected, we can only be so intelligent, else we'd have solved all of our problems by now.
But stupidity seems to have no limit.