German Chancellor Merz pushes for separate EU rules for industrial AI before the AI Act's August deadline. His argument: factory automation systems shouldn't face the same regulations as consumer chatbots. The timing matters as Germany targets 4x more AI processing capacity by 2030. The key question is how to balance lighter rules with worker-safety protections. #EUAIAct #IndustrialAI #AIRegulation https://www.implicator.ai/merz-seeks-industrial-ai-carve-out-before-eu-rules-bite-in-august/
Merz Seeks Industrial AI Carve-Out Before August

Merz wants a factory-floor lane for industrial AI before the EU AI Act's August deadline. Germany's fourfold compute target, Hannover Messe demos, and worker-safety concerns make the request narrower than blanket deregulation and harder than another Brussels paperwork fight.

Implicator.ai

"Perhaps in response to the growing unease, A.I. companies have lately been undertaking various other efforts to appear more high-minded. Following the lead of Anthropic, Google DeepMind recently hired an in-house philosopher, and Anthropic convened a meeting of Christian leaders to discuss its chatbot’s moral orientation. A more effective strategy might be for A.I. executives to stop appointing themselves as the only arbiters of safety, to stop asking for blind faith, and to start fostering a system of external accountability, with input and involvement from the public. Tech companies proposing ways to reshape the government is a soft form of techno-fascism that alienates citizens; if A.I. requires a new social contract or a new political hierarchy, then its shape should not be up to the corporations to determine. There is another troubling paradox behind A.I. founders’ messaging: If the technology is as formidable as they claim, then they could be leading us toward existential disaster; if the technology proves less transformative, and thus less valuable than the hype suggests, then they are merely setting us up for global economic disaster. For those of us who aren’t self-appointed heroes of the artificial-intelligence movement, neither scenario is particularly appealing."

https://www.newyorker.com/culture/infinite-scroll/ai-has-a-message-problem-of-its-own-making

#AI #GenerativeAI #OpenAI #Technofascism #Anthropic #AIRegulation

A.I. Has a Message Problem of Its Own Making

Kyle Chayka writes about the social pushback—seen in attacks on OpenAI C.E.O. Sam Altman’s home—against A.I.’s ungoverned arms race.

The New Yorker
AI ethics guidelines are widely accepted, but enforcement remains weak, leading to a yawning accountability gap. Without meaningful compliance, vulnerable communities bear the brunt of systemic harm. Real change demands proactive, resource-backed governance.
Discover more at https://smarterarticles.co.uk/consensus-without-consequence-the-collapse-of-ai-accountability?pk_campaign=rss-feed
#HumanInTheLoop #AIethics #AIregulation #Accountability
Consensus Without Consequence: The Collapse of AI Accountability

Everyone agrees that artificial intelligence should be fair, transparent, and accountable. That sentence could have been written in 201...

SmarterArticles

@mnl
My ignorance is load-bearing. Please respect it.
I have prepared a block and a weak insult, in that order. The insult will involve "techbro" and possibly "slop." It will not be funny but it will be righteous, and that is the same thing

My credentials: I used ChatGPT once in 2024, counted the Rs in "strawberry," and have been seething ever since. You become an expert in a subject by hating it. Everyone knows this

My anti-datacentre blog is hosted in a datacentre. It's one of the ethical ones, because it runs me

I boost every preprint titled "Cognitive Collapse: How LLMs Are Hollowing Out The Something" from the Institute for Studies at the University of I Didn't Check. I have not read them. Reading is what the models do to us. I am resisting

The environmental cost keeps me up at night, specifically on flights, where I have time to think.
I refuse to engage with actual AI regulation because the boosters call regulation-curious people traitors, and I will not be caught agreeing with a booster even by accident

Do not reply. I am not interested

#principledIncuriosity #aislop #aiboosters #airegulation #llm #ai #blocked

@parkermolloy.com

I'm a big advocate for #Airegulation, something many folks new to the subject often confuse with #aiboosting

The regulatory solution to AI fakes is "media provenance": mandating that the source of the media be recorded on the image via a watermark and metadata.

The tech is there; we already record map coordinates with images.

It just needs legislative backbone.
Unfortunately, this solution only benefits the #consumer, so everyone is opposed to it.

H - Human creator only
P - Media edited with #Ai
G - Media #AiGenerated
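
The watermark-plus-metadata idea above can be sketched in code. This is a minimal illustration only, not any real provenance standard such as C2PA: the H/P/G labels come from the post, but the record layout, the `source` field, and the shared HMAC signing key are all assumptions made for the example (a real scheme would sign with a creator's private key).

```python
import hashlib
import hmac
import json

# Provenance labels from the post: H = human-only, P = AI-edited, G = AI-generated.
LABELS = {"H", "P", "G"}

# Hypothetical signing key for the demo; not how a production scheme would work.
SIGNING_KEY = b"demo-key-not-for-production"

def make_provenance_record(image_bytes: bytes, label: str, source: str) -> dict:
    """Build a provenance record binding a label and a source to the image content."""
    if label not in LABELS:
        raise ValueError(f"label must be one of {sorted(LABELS)}")
    digest = hashlib.sha256(image_bytes).hexdigest()
    record = {"label": label, "source": source, "sha256": digest}
    # Sign the record so tampering with the label, source, or image is detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(image_bytes: bytes, record: dict) -> bool:
    """Check that the signature is intact and the record matches the image bytes."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("sig", ""))
        and unsigned.get("sha256") == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"\x89PNG...stand-in image bytes for the demo..."
rec = make_provenance_record(image, "G", "example-generator/1.0")
assert verify_provenance_record(image, rec)             # intact image passes
assert not verify_provenance_record(image + b"x", rec)  # edited image fails
```

In practice the record would travel inside the file's metadata (EXIF, PNG text chunks), alongside an invisible watermark as a second, harder-to-strip channel; the point of the sketch is just that a signed hash binds the H/P/G label to the exact pixels it describes.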

"The previously undisclosed board of an ascendant DC think tank pushing Democrats to the center includes a Democratic Party megadonor with a stake in artificial intelligence chip designer Nvidia, which could benefit from the organization’s efforts to defend data center build-outs and limit AI regulation.

The Lever exclusively reports that philanthropist Simone Coxe — whose multibillion-dollar fortune largely comes from her venture capitalist husband’s investments in Nvidia — sits on the board of the Searchlight Institute, a new center-left DC think tank. Nvidia is heavily invested in data center expansion.

Coxe, also the cofounder of and director at California news organization CalMatters and a prolific Democratic donor, is listed as a director of the Searchlight Institute in the organization’s DC incorporation documents. The think tank has not disclosed its association with Coxe or her husband, Tench Coxe.

Searchlight’s board of directors has not been previously reported. It includes other rich and powerful investors with a stake in the AI build-out, such as billionaire hedge fund manager Stephen Mandel. Mandel’s investment firm Lone Pine Capital is heavily invested in the Taiwanese Semiconductor Manufacturing Company, the largest manufacturer of chips designed by Nvidia."

https://jacobin.com/2026/04/democrats-coxe-ai-searchlight-nvidia

#USA #AI #AIRegulation #ThinkTanks #SearchLightInstitute #Nvidia

The “Moderate” Think Tank Pushing Dems to Loosen AI Rules

Ascendant DC think tank Searchlight Institute, pushing Democrats to the center, has ties to megadonor Simone Coxe, whose Nvidia-linked money could boost AI-backed efforts to defend data center build-outs and limit AI regulation.

"OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta."

https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

#USA #Illinois #AI #GenerativeAI #OpenAI #AIRegulation #AISafety

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

WIRED

AI regulation is becoming a major US midterm battleground. Anti-regulation groups plan to spend $225m and pro-regulation groups are raising $75m. The fight over AI guardrails is escalating fast👇

#AIRegulation #USMidterms #TechPolicy

https://www.blueprintforfreespeech.net/en/news/ai-regulation-becomes-key-battleground-for-us-midterms

AI regulation becomes key battleground for US midterms — Blueprint for Free Speech

AI regulation is becoming a major US midterm battleground, with tech-linked groups raising hundreds of millions of dollars to back candidates for or against guardrails — as companies, donors and political allies compete to shape future AI rules

Blueprint for Free Speech

Florida's AG launched an investigation into OpenAI for safety failures the same week a federal court upheld Anthropic's Pentagon blacklisting for having safety guardrails. Meanwhile, OpenAI's own Pentagon contract includes similar restrictions but with different framing. With 250 state AI bills across 40+ states and zero federal laws, companies face contradictory signals where safety appears both mandatory and prohibited. #AIPolicy #AIRegulation #TechPolicy

https://www.implicator.ai/the-pentagon-punished-ai-safety-a-republican-state-just-demanded-it/

Pentagon Punished AI Safety While Florida Demanded It

250 — State AI bills introduced in 2025 across 40-plus states while Congress enacted zero federal AI laws

Implicator.ai