Software Harm Reduction

AI-generated code has now landed in Python, curl, and systemd. We face an ethical crisis. Slopware leaves us two possible responses: absolutism or harm reduction. This moment demands the same principled stand that free software absolutists have taken for decades.

brennan.day
Disney+ Just Embraced the Scroll, and That Should Worry Your Attention Span

Disney+ launches Verts, embracing TikTok-style scrolling. What this reveals about streaming's broken promises and your attention span.

The Daily Perspective

'As the higher brain functions of society fade, what remains gradually starts to look like an economic creature moving on instinct—a zombie, mostly brain-dead, reacting only to immediate stimuli and driven by an insatiable hunger. [...] Not the “new industrial revolution” promised by #AI leadership, but weak productivity, low-value workslop, and a massive bubble of unprofitability.'

https://brooklynrail.org/2026/02/field-notes/social-debraining/
#tech #techCriticism #labour #artificialIntelligence #SiliconValley

Social Debraining | The Brooklyn Rail

If America had a brain, we could say it’s losing cortical tissue. The neocortex—the part of our brain that helps us plan, learn, and think beyond the moment—balances long-term planning against short-term reflex.

#SpeakingOutOfPlace welcomes #CarissaVéliz, author of #Prophecy: Prediction, Power, and the Fight for the Future—from Ancient Oracles to #AI

"[W]e talk about how both massive and intrusive invasions of privacy at all levels of society and false claims to be able to predict the future erode democracy, are corrosive to #ethics, and undermine people’s ability to think for themselves."

https://speakingoutofplace.com/2026/02/19/bullshit-and-infinity-why-ai-cannot-predict-anything-a-conversation-with-carissa-veliz/

#tech #artificialIntelligence #techCriticism #surveillance #books @bookstodon

Bullshit and Infinity: Why AI Cannot Predict Anything: A Conversation with Carissa Véliz | Speaking Out of Place

A critique of tech alarmism: an analysis of the discourse around the AI apocalypse and its use as a tool of distraction from the structural problems of 2026. 🧠👾 🔗 https://www.glitchmental.com/2026/02/apocalipsis-ia-verdad-incomoda.html #AIEthics #TechCriticism #DigitalTrends #GlitchMentalMX
An ‘AI afterlife’ is now a real option – but what becomes of your legal status? | The-14

AI-driven digital afterlives raise urgent legal and ethical questions about consent, ownership, identity and responsibility after death in a growing grief tech industry.


I’m working on a research paper examining platforms as infrastructural religions and am interested in how Peter Thiel’s recent Paris presentation on the Antichrist may intersect with that framework. I understand the slides were distributed to attendees, but I haven’t seen a public archive. If anyone knows whether copies are accessible for academic research, or if an official source is planned, I’d appreciate the pointer.

#DigitalReligion #PoliticalTheology #TechCriticism

The Absurd Wall Around Picture‑in‑Picture for Music on YouTube

There is something uniquely frustrating about running headfirst into a limitation that feels completely artificial. Not a technical constraint. Not a hardware shortcoming. Not even a genuine legal impossibility. Just a wall, quietly erected, that exists because someone decided it should. YouTube’s refusal to allow music content to run in Picture‑in‑Picture mode on iPhone and iPad is one of those walls. It stands there, immovable, while everything around it suggests that it should not […]

https://jaimedavid.blog/2026/01/17/00/01/40/analysis/jaimedavid327/9061/the-absurd-wall-around-picture-in-picture-for-music-on-youtube/

🛡️ The Quiet Revolution in AI Safety

The transformation is remarkable: AI safety evolved from philosophical thought experiments to engineering frameworks with nuclear-level precision.

Companies like Anthropic, OpenAI, and Microsoft now use concrete thresholds (100 deaths OR $1B damages) and treat model security like protecting launch codes.
Two critical insights:

The real threat isn't "evil AI"—it's AI empowering individuals with nation-state capabilities
Every safety measure is an admission that underlying models retain dangerous potential

Most telling: Companies must deliberately test AI with NO safety constraints to understand maximum risk.

🎧 Listen: https://www.buzzsprout.com/2405788/episodes/17902197

📖 Read: https://helioxpodcast.substack.com/publish/post/174544725

This isn't about preventing Skynet—it's about a species learning to coexist with its own creations.

#AISafety #TechEthics #AIGovernance #OpenSource #TechPolicy #CyberSecurity #DigitalRights #TechAccountability #AITransparency #TechCriticism

Meta's products are bizarre, really just garbage! And now they've added regional discrimination on top of it? Here's the situation: Meta AI on WhatsApp is hard to put into words. The Llama model's Chinese is actually decent, yet Meta insists on blocking Chinese. When I send the AI a message in Chinese, the AI will reply in Chinese but the reply is immediately blocked by the system! What is that supposed to mean? Other languages aren't affected. Llama's Chinese is fine and causes no problems, yet it gets deliberately blocked. To hell with that. And it's not just WhatsApp: the AI in Instagram and other Meta apps behaves the same way! It's infuriating, and Meta AI on WhatsApp is also somewhat buggy (clearly visible in the video). If everyone around me didn't use WhatsApp, I wouldn't want to use this crap 💩 Not one of Meta's products is any good. In my personal opinion they're all garbage, which is why I lean toward open source. I still use closed-source things, but I prefer open source.
@board #Meta #MetaAI #WhatsApp #Instagram #Facebook #AI #Llama #TechBias #AIBias #SocialMedia #BigTech #TechCriticism