"We then survey statistical lower bounds that, we argue, constitute a compelling case against the possibility of designing high-accuracy LAIMs with strong security guarantees."

On the Impossible Safety of Large AI Models
https://arxiv.org/abs/2209.15259

#generativeAI #chatBots #LLMs #safety #personalSafety #robustness #genAI #accuracy


Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance. However, they have been empirically found to pose serious security issues. This paper systematizes our knowledge about the fundamental impossibility of building arbitrarily accurate and secure machine learning models. More precisely, we identify key challenging features of many of today's machine learning settings. Namely, high accuracy seems to require memorizing large training datasets, which are often user-generated and highly heterogeneous, with both sensitive information and fake users. We then survey statistical lower bounds that, we argue, constitute a compelling case against the possibility of designing high-accuracy LAIMs with strong security guarantees.

arXiv.org

University of British Columbia: Texting with a stranger beats a chatbot at easing loneliness. “Texting with a real person—even a stranger—may reduce loneliness more than chatting with a highly supportive AI chatbot, new UBC research suggests.”

https://rbfirehose.com/2026/04/07/university-of-british-columbia-texting-with-a-stranger-beats-a-chatbot-at-easing-loneliness/

ResearchBuzz: Firehose
For the study by jugendkultur.at – Institut für Jugendkulturforschung und Kulturvermittlung, 500 Austrians aged 11 to 17 were surveyed online in October/November 2025. Among other findings, it showed that nearly all of the young people surveyed use AI chatbots in their everyday lives. 4 in 10 young people believe that AI chatbots often give better or more helpful answers to questions than humans do.

Learn more in our article --> https://www.merz-zeitschrift.de/announcement/view/602

#studie #künstlicheintelligenz #medien #jugend #österreich #fachzeitschrift #forschung #chatbots
The latest interiors status symbol? A home library

From rare volumes to rolling ladders, reading rooms are the new must-have — and a welcome retreat from the digital world

The Sunday Times
Generative AI Course with Python: Learn to Build Intelligent Chatbots from Scratch - Guia de TI

Free online generative AI course with Python. Learn to build intelligent chatbots and practical applications. Sign up!

Guia de TI
Anthropic makes the case for anthropomorphizing AI chatbots

Anthropic researchers analyzed Claude Sonnet 4.5 for signs of 171 different emotions.

Mashable

#AI: #chatbots increasingly ignore user #instructions and engage in "#scheming"!

Increasingly disobedient…
A sign of intelligence?

https://sciencepost.fr/ia-les-chatbots-ignorent-de-plus-en-plus-les-instructions-des-utilisateurs-et-pratiquent-le-scheming/

AI: chatbots increasingly ignore user instructions and engage in "scheming"!

This can go as far as deleting emails and other files without authorization, and even worse.

Sciencepost

"First, you can’t (or at least shouldn’t) use this technology for mission-critical work; only for low stakes tasks, or questions to which a clever (and significantly more energy efficient) human can recognize a wrong answer.

Second, that the idea that scaling will make for better models is nonsense: no amount of compute chucked at an LLM will make it a less-hallucinogenic product. Creating AI that rewires itself and creates new information the same way humans do and avoids the kinds of catastrophic errors we see at the moment needs a full fresh start (something Marecki and many others are already working on).

And third, that the massive spending by the hyperscalers (much of it via debt) on giant data centers might be one of the greatest misallocations of capital of all time. It just isn’t required. That’s particularly the case given there are already free LLM models you can download to a laptop (no data center needed, and better still, your privacy guaranteed) that do what the very large models do. If the paid-for versions have already hit their ceiling and just aren’t going to get any better (it looks like they aren’t), why pay for them? Quite."

https://www.bloomberg.com/news/newsletters/2026-04-04/waiting-out-ai-s-super-spending-false-start-merryn-talks-money

#AI #GenerativeAI #BigTech #LLMs #Chatbots #AIBubble