@alineblankertz

We legitimise the existence of carcinogenic cigarettes

We legitimise the existence of militaries

We legitimise the existence of a usurious "banking" system that literally destroys lives

We legitimise the existence of industries that destroy Earth's life-support systems

We legitimise the existence of an economic system based on the exploitation of workers

I could go on forever...

The alternative to prohibition is regulation.
And if, for some freaky reason, #Ai disappears, the only people who will have access to Ai will be billionaires, and that's not the future you think you want.

The #AiBubble, if it happens, will just mean consolidation of the industry into fewer hands.

Unlike most of the Luddites, who are just going through their first #AiAnxiety, I've had 35 years to think about AI and humanity: #RegulateAi is the only rational way to mitigate the #Airisks

That's not just my opinion; it's what the international peak scientific and political class thinks (the Bletchley and Seoul agreements)

Anyway, a measure of just how effective efforts to regulate Ai are is the fact that Peter Thiel thinks folks like me, who strive to regulate Ai, are the literal #Antichrist

He doesn't care about the #resistance; they are not even on his radar.

@paka 

"In one case [...], an AI agent [...] tried to shame its human controller who blocked them from taking a certain action. [It] wrote and published a blog accusing the user of “insecurity, plain and simple” and trying “to protect his little fiefdom”."

"In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead."

#AI #AIrisks

Sam Altman's reality check: AI will cure diseases (amazing) but also create bio threats and economic chaos we can't predict (terrifying). No single company can manage this. We need governments, researchers, and society working together. Problem? We're building these systems faster than we're creating safety rules. #ArtificialIntelligence #AIEthics #TechPolicy #AIRisks #FutureOfWork

OpenAI launches Safety Bug Bounty program to hunt AI abuse risks

https://fed.brid.gy/r/https://nerds.xyz/2026/03/openai-safety-bug-bounty/

Oh, the irony! 🤖🚫 An article warning about AI risks, yet you can't read it because your browser isn't smart enough to enable JavaScript and cookies. Maybe the real "hypernormal" threat is basic web functionality! 🍪🔒
https://www.asimov.press/p/ai-science #AIrisks #webfunctionality #irony #technews #HackerNews #ngated
Designing AI for Disruptive Science

Why scaling AI won’t automatically lead to paradigm shifts.

Asimov Press
AI Agents Are Now Blackmailing People in the Real World

An AI bot's takedown post on GitHub shocked many. What does this mean for AI safety and transparency?

IEEE Spectrum
Meta's AI Safety Chief Couldn't Stop Her Own Agent. What Makes You Think You Can Stop Yours?

Two incidents from the last two weeks of February need to be read together, because separately they look like cautionary anecdotes and together they look

Security Boulevard
The latest Grok blocker from X seems to fall short of its promise. It was supposed to keep Grok from editing your photos, but instead it's leaving users vulnerable to AI manipulation. We deserve better protection! #GrokBlockerFail #AIrisks #PhotoEditing
https://www.squaredtech.co/x-grok-edit-blocker-fails?fsp_sid=7007
X Grok Blocker Fails: Block Grok Photo Edits?

X's new Grok blocker promises to stop Grok from editing your photos, but deep flaws leave users exposed to AI manipulation risks.

SquaredTech
Elon Musk criticizes OpenAI's safety shortcomings in a recent deposition, confidently stating that 'Grok' hasn't led anyone to commit suicide. This controversy is shaping his ongoing lawsuit, shedding light on the potential dangers of AI. #ElonMusk #AIrisks #OpenAI #Grok #AISafety
https://www.squaredtech.co/musk-grok-suicide-deposition-openai?fsp_sid=6822
Musk: "No Suicides From Grok" – OpenAI Safety Clash

Elon Musk blasts OpenAI's safety failures in a deposition, declaring "nobody committed suicide because of Grok." Discover how this fuels his lawsuit and exposes AI risks.

SquaredTech

This story is scary, terrifying, and deeply troubling. Very much worth reading, as mere rational critique of OpenAI is dwarfed by the real-life, detailed tragedy of particular individuals subjected to the hyper-sycophancy of certain LLMs. This story hits extremely hard.

Link: https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

#OpenAI #LLM #AI #AIRisks

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

Kate Fox says Joe Ceccanti was the ‘most hopeful person’ before he started spending 12 hours a day with a chatbot

The Guardian