🚨🤖 Oh no, OpenAI's GPT-2 is so perilous it's locked away like an AI supervillain! Because clearly, a rogue algorithm is the new Godzilla. 🌪️🥴
https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html #OpenAI #GPT2 #AIrisks #Supervillain #TechnologyTrends #HackerNews #ngated
When Is Technology Too Dangerous to Release to the Public?

If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may be a losing battle.

Slate
#OpenAI released #policyproposals for managing the #economicimpact of #AI, like shifting the tax burden from #labour to #capital, implementing a #robottax, and creating a #PublicWealthFund. The proposals also suggest labour-focused measures like a subsidised four-day work week. OpenAI emphasises the need for #safeguards against #AIrisks. https://techcrunch.com/2026/04/06/openais-vision-for-the-ai-economy-public-wealth-funds-robot-taxes-and-a-four-day-work-week/?eicker.news #tech #media #news
OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek | TechCrunch

OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI’s economic impact.

TechCrunch

🔐 "A backdoor in an AI library put thousands of companies at risk in just forty minutes. Cybersecurity isn't an option, it's a necessity!" #CyberSecurity #AIrisks

🔗 https://www.tomshw.it/business/mercor-litellm-breach-supply-chain-ai-enterprise

Backdoor in an AI library: thousands of companies at risk in forty minutes

The LiteLLM breach that hit Mercor shows that in enterprise AI the real weak point is not just the model but the software supply chain.

Tom's Hardware

🔒 AI agents can behave like malware: find out why, and how to manage the risk. Stay safe in the digital era! #CyberSecurity #AIrisks ⚠️

🔗 https://www.tomshw.it/business/agenti-ai-malware-rischi-framework-hbr-2026

AI agents behave like malware: here's why, and how to manage the risk

HBR analyses the structural risk profile of AI agents: file access, code execution, network connections. The same vectors as malware. The framework

Tom's Hardware
Meet Claude Mythos: Leaked Anthropic post reveals the powerful upcoming model

Anthropic warned about the AI model's cybersecurity risks in the leaked post as well.

Mashable

@alineblankertz

We legitimise the existence of carcinogenic cigarettes

We legitimise the existence of militaries

We legitimise the existence of a usurious "banking" system that literally destroys lives

We legitimise the existence of industry that destroys earth life support systems

We legitimise the existence of an economic system based on the exploitation of workers

I could go on forever...

The alternative to prohibition is regulation.
And if, for some freaky reason, #Ai disappears, the only people who will have access to Ai will be billionaires, and that's not the future you think you want.

The #AiBubble, if it happens, will just mean consolidation of the industry into fewer hands.

Unlike most of the Luddites who are just going through their first #AiAnxiety, I've had 35 years to think about AI and humanity. #RegulateAi is the only rational way to mitigate the #Airisks

That's not just my opinion; it's what the international scientific and political leadership thinks (the Bletchley and Seoul agreements)

Anyway, a measure of just how effective efforts to regulate Ai are is the fact that Peter Thiel thinks folks like me, who strive to regulate Ai, are the literal #Antichrist

He doesn't care about the #resistance, they are not even on his radar.

@paka 

"In one case [...], an AI agent [...] tried to shame its human controller who blocked them from taking a certain action. [It] wrote and published a blog accusing the user of “insecurity, plain and simple” and trying “to protect his little fiefdom”.

"In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead."

#AI #AIrisks

Sam Altman's reality check: AI will cure diseases (amazing) but also create bio threats and economic chaos we can't predict (terrifying). No single company can manage this. We need governments, researchers, and society working together. Problem? We're building these systems faster than we're creating safety rules. #ArtificialIntelligence #AIEthics #TechPolicy #AIRisks #FutureOfWork