Banning visas for non-compliant US tech execs under #DSA isn’t about free speech—it’s about enforcing EU rules on transparency and illegal content.

Accountability ≠ censorship.

#DigitalRegulation #TechAccountability #DSA #EU #BigTech

Predictive technology, including algorithms and AI, is quietly replacing human judgement in under-resourced public services such as policing. In Spain, these systems are used to assess the safety needs of women fleeing domestic violence. Too often they get it wrong. When automated risk scores replace lived experience and professional care, the consequences are not abstract. They are real and sometimes fatal. This video is a stark reminder that technology without accountability can deepen harm rather than prevent it.

#ai #algorithms #publicservices #policing #domesticviolence #techaccountability #spain #humanjudgement

https://www.youtube.com/watch?v=zOjzcHC6RZg

AI replaced Spain's human judgment in domestic violence cases—with tragic results | ABC NEWS

YouTube
Data Journalism vs. Big Tech

PeerTube

Ex-Facebook Safety Exec Testifies to Congress About Online Harms to Teenagers on Social Media

Arturo Bejar, a former senior leader in charge of safety and care at Facebook (2009–2015) and later a consultant for Instagram (2019–2021), testified before the US Congress that social media companies misrepresent and ignore the widespread harm on their platforms, particularly to children.

In his testimony, Bejar reported that internal research indicated 13% of 13–15-year-old users self-reported receiving unwanted sexual advances within a single seven-day period, a level of abuse he called "likely the largest-scale sexual harassment of teens to have ever happened".

He explained that since social media companies are data-guided and treat problems that aren't measured as if they "don't exist," most user distress goes unaddressed.

Bejar advocates for regulation mandating that companies track and publicly report concrete safety metrics, which would push social media platforms to genuinely prioritize user protection over engagement and growth.

Read Arturo Bejar's full testimony here: https://www.judiciary.senate.gov/imo/media/doc/2023-11-07_-_testimony_-_bejar.pdf

#SocialMediaSafety #BigTech #TeenHarassment #ArturoBejar #CongressTestimony #ChildProtection #PlatformRegulation #TechAccountability

Australia’s national plan says existing laws are enough to regulate AI. This is false hope | The-14

Australia’s AI plan lacks enforceable safeguards, leaving people vulnerable as AI harms grow. Experts warn that voluntary rules fail.

The-14 Pictures

Follow OSINT Intuit™ for hard-hitting truth and uncompromising analysis of Russia’s information war.

#UkraineWar #RussiaUkraineWar #NATOReady #EuropeanSecurity

#AIbias #LLMs #Grok #NarrativeControl #TechAccountability
------------------

Dialogue from Grok 4.1 shown in the screen clips after I tapped on "Profile Summary":

Grok 4.1

"He was a bushranger driven by injustice"

@XXXX


The Hidden Cost of AI: Water Waste and Community Impact

Conversations about AI often overlook its environmental impact, specifically the water drawn from local community supplies by inefficient data-center cooling systems. Closed-loop systems, similar to those used by hobbyists, are viable for data centers but are passed over on cost grounds. Sustainable practices are essential to protect ecosystems and communities from the industry's unchecked expansion.

https://dreamspacestudio.net/the-truth-about-ai-water-use-if-fishkeepers-can-do-closed-loop-cooling-billion-dollar-data-centers-can-too/

What if American workers decided the only ones entitled to be "too big to fail" are us – the ones whose labor built your platforms, whose attention became your product, whose data trained your models, and whose spending made you untouchable?
https://substack.com/@shaynej/note/c-184374857?r=5h4wg0&utm_source=notes-share-action&utm_medium=web

#AI #AIBubble #WorkersFirst #us #TechAccountability #TooBigToFail #Leadership

AI in journalism and democracy: Can we rely on it? | The-14

How generative AI reshapes journalism, challenges verification, and risks eroding democratic trust, demanding stronger transparency and accountable oversight.


AI labs issue safety statements like scattered blog posts, but lack the structural discipline pharma learned decades ago: non-negotiable principles, repeatable proof, unified messaging. Trust isn't a press cycle—it's architecture. #AIGovernance #TechAccountability #AIEthics

https://www.implicator.ai/opinion-why-ai-labs-need-message-houses-not-just-safety-statements/