...On Saturday, the new pope made some surprise appearances after meeting with cardinals to detail his vision, identifying artificial intelligence as one of humanity's biggest threats and embracing his predecessor's ideals of a more inclusive church...
Full story:
https://www.cbsnews.com/news/the-papacy-of-leo-xiv-begins/
#aiishype #humanimitation #humanimitationengine #hie #aiismisnomer #noai #aihype

The papacy of Leo XIV begins
Robert Prevost, a tennis-loving, Wordle-playing White Sox fan from Chicago, is now leader of the world's nearly 1.5 billion Catholics. Vatican observers describe what the election of Leo XIV, the first pope from America, means for the faithful, and the world.
In a groundbreaking feat of #AI wizardry, #UCSD triumphantly declares that their language models have finally learned to convincingly imitate humans. Apparently, these digital chatterboxes are now capable of hoodwinking us mere mortals, because who needs real human interaction anyway?
https://arxiv.org/abs/2503.23674 #Innovation #LanguageModels #HumanImitation #TechTrends #HackerNews #ngated
Large Language Models Pass the Turing Test
We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5-minute conversations simultaneously with another human participant and one of these systems before judging which conversational partner they thought was human. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant. LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time -- not significantly more or less often than the humans it was being compared to -- while baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21% respectively). The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test. The results have implications for debates about what kind of intelligence is exhibited by Large Language Models (LLMs), and the social and economic impacts these systems are likely to have.