Elon Musk is seeking 134 billion USD in damages from OpenAI and Microsoft in his ongoing lawsuit, but wants any winnings to go to OpenAI's non-profit arm. Musk also wants Sam Altman and Greg Brockman ousted and their equity transferred to the charity. https://gizmodo.com/musk-changes-openai-lawsuit-so-that-if-he-wins-the-134-billion-openais-nonprofit-gets-it-2000743663 #AIagent #AI #GenAI #AIGovernance
Musk Changes OpenAI Lawsuit So that If He Wins the $134 Billion, OpenAI’s Nonprofit Gets It

Musk also wants Sam Altman out of OpenAI.

Gizmodo

🚨 New Article - Time Without a Clock: Future-Admissibility as the Source of Temporal Direction

We argue that temporal direction does not require an external clock or a privileged first instant.

🔗 https://zenodo.org/records/18552401

#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg
#healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon
#tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM
#ClinicalAAI

Time Without a Clock: Future-Admissibility as the Source of Temporal Direction

We argue that temporal direction does not require an external clock or a privileged first instant. Time is the parameter that maximizes joint predictability among coupled observables under a minimal admissibility set. Given reversible microdynamics, the admissibility set restricts the space of attainable histories and thereby induces asymmetric gradients in the present, selecting an arrow without invoking teleology. This operational criterion unifies the thermodynamic and cosmological arrows as instances of the same constraint mechanism: when admissibility suppresses late-time macroscopic complexity, the two arrows coincide; when the constraint flattens, effective time symmetry is recovered. We extend the framework across domains by treating grammatical constraints as admissibility sets over sequences, yielding an operational notion of discourse directionality defined by the same predictability maximization. Three toy models (a coarse-grained gas, a coupled lattice, and an FRW sketch with bounded late-time curvature) illustrate the mechanism and delimit its empirical signatures.

Date: February 2026
Primary archive (DOI): https://doi.org/10.5281/zenodo.18552401
Secondary archive (DOI): https://doi.org/10.6084/m9.figshare.31295488
SSRN: Pending assignment (ETA: Q1 2026)

Zenodo

π—”π—œ π˜„π—Άπ˜π—΅π—Όπ˜‚π˜ π˜π—΅π—² π—›π˜†π—½π—² (𝗼𝗿 𝗕𝗹𝗢𝗻𝗱 π—¦π—½π—Όπ˜π˜€) πŸ¦Ύβš–οΈ

For this week's review, Yisehak Lemma examines The AI Conundrum, written by the father-son duo of Caleb and Rex Briggs.

🔎 Full review: https://cybercanon.org/the-ai-conundrum/

#CybersecurityBooks #AISecurity #AIGovernance

AI spectrum: Co-pilot vs Autopilot

CO-PILOT: AI suggests, human decides
β€’ Legal, HR, brand, customer escalations

AUTOPILOT: AI decides and acts
β€’ Spam, alerts, replenishment, routing

Decide by function. Make it your governance framework.
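A minimal sketch of "decide by function" as a policy table (function names are illustrative, not from any real system): each business function is mapped to a mode, and a gate decides whether AI output needs human sign-off before it takes effect.

```python
# Function-level governance table: assign each business function a mode,
# then gate every AI action on that mode.
from enum import Enum

class Mode(Enum):
    CO_PILOT = "co-pilot"    # AI suggests, human decides
    AUTOPILOT = "autopilot"  # AI decides and acts

# Decide by function, not by tool (mapping mirrors the examples above).
POLICY = {
    "legal": Mode.CO_PILOT,
    "hr": Mode.CO_PILOT,
    "brand": Mode.CO_PILOT,
    "customer_escalation": Mode.CO_PILOT,
    "spam_filtering": Mode.AUTOPILOT,
    "alerting": Mode.AUTOPILOT,
    "replenishment": Mode.AUTOPILOT,
    "routing": Mode.AUTOPILOT,
}

def requires_human_approval(function: str) -> bool:
    # Unknown functions default to co-pilot: fail safe toward human review.
    return POLICY.get(function, Mode.CO_PILOT) is Mode.CO_PILOT
```

Defaulting unlisted functions to co-pilot is the conservative choice: autonomy has to be granted explicitly, never assumed.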

#AI #EnterpriseAI #AIGovernance #dougortiz

The mall didn't fail because people stopped buying. It failed because we mistook infrastructure for retail. Synthetic data is at risk of the same mistake. https://hackernoon.com/why-the-mall-failed-and-what-it-teaches-us-about-synthetic-data #aigovernance
Why the Mall Failed: and What It Teaches Us About Synthetic Data | HackerNoon



New piece: Constrained by Design -- Building AI Systems You Can Actually Defend.

Why n8n + Ollama looks like a complete solution but isn't. The difference between "we used AI" and "we can prove how AI was used."

For compliance advisory, auditability isn't overhead. It's the whole product.
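One way to make "we can prove how AI was used" concrete (a sketch under assumed names, not the article's implementation): record every model call as a hash-chained audit entry, so the trail is tamper-evident and stores no raw prompt text.

```python
# Append-only audit trail for AI calls: each entry stores hashes rather
# than raw text, and chains to the previous entry so edits are detectable.
import datetime
import hashlib
import json

def audit_record(model: str, prompt: str, output: str) -> dict:
    """Build one auditable record of an AI call."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        # Hashes prove which prompt/output were used without leaking them.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

def append_to_trail(trail: list, record: dict) -> str:
    """Chain the record to the previous entry, like a minimal ledger."""
    prev = trail[-1]["entry_hash"] if trail else ""
    record["prev_hash"] = prev
    record["entry_hash"] = hashlib.sha256(
        (prev + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    trail.append(record)
    return record["entry_hash"]
```

Verifying the trail later means recomputing each entry hash from its predecessor; any altered or deleted entry breaks the chain.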

https://open.substack.com/pub/sovereignauditor/p/constrained-by-design

#SovereignAuditor #AIGovernance #LegalTech #Compliance #DataProtection #DPA #GDPR #AuditTrail

Constrained by Design

Building AI Systems You Can Actually Defend

The Sovereign Auditor

Soft "alignment" isn't enough for the enterprise.

AI needs deterministic constraints.

Learn how Policy-as-Code and Kill Switches provide a safety envelope for autonomy.
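A minimal, illustrative sketch of the Policy-as-Code plus kill-switch idea (names and rules are hypothetical, not the white paper's implementation): actions pass through a deterministic, default-deny rule check, and an engaged kill switch denies everything regardless of policy.

```python
# Deterministic safety envelope: every agent action is checked against
# explicit, reviewable rules; the kill switch overrides all of them.

KILL_SWITCH = {"engaged": False}

# Policy-as-Code: rules are plain data, so they can be versioned and audited.
POLICY_RULES = [
    {"action": "read_database"},
    {"action": "send_email", "max_recipients": 10},
]
ALLOWED_ACTIONS = {rule["action"] for rule in POLICY_RULES}

def is_allowed(action: str, **params) -> bool:
    if KILL_SWITCH["engaged"]:
        return False  # kill switch wins over every rule
    if action not in ALLOWED_ACTIONS:
        return False  # default-deny: unlisted actions never run
    if action == "send_email" and params.get("recipients", 0) > 10:
        return False  # per-rule limits are hard constraints, not suggestions
    return True
```

The point of determinism: the same action and parameters always produce the same verdict, which is what makes the envelope defensible after the fact.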

https://www.sakurasky.com/white-papers/trustworthy-agentic-ai-blueprint/

#AIGovernance #InfoSec

OpenAI insiders don't trust CEO Sam Altman, according to a new investigation. The New Yorker report raises fresh questions about OpenAI's leadership as the company releases policy recommendations for the 'intelligence age.' https://arstechnica.com/tech-policy/2026/04/the-problem-is-sam-altman-openai-insiders-dont-trust-ceo/ #AIagent #AI #GenAI #AIGovernance
β€œThe problem is Sam Altman”: OpenAI insiders don’t trust CEO

OpenAI brainstorms ways AI can benefit humanity in effort to counter bad vibes.

Ars Technica

OpenAI has proposed taxes on AI profits, public wealth funds and expanded safety nets to address job loss and inequality as policymakers debate AI's economic impact. The company says superintelligence will be as disruptive as the Industrial Revolution and calls for a democratic process to shape the AI future. https://techcrunch.com/2026/04/06/openais-vision-for-the-ai-economy-public-wealth-funds-robot-taxes-and-a-four-day-work-week/ #AIagent #AI #GenAI #AIGovernance
OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek | TechCrunch

OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI’s economic impact.

TechCrunch

OpenAI has published a policy paper outlining its vision for the 'Intelligence Age', proposing ideas including a public wealth fund, taxes on automated labour and a four-day work week. The company says superintelligence will be as disruptive as the Industrial Revolution and calls for a democratic process to shape the AI future. https://gizmodo.com/openai-releases-its-vague-vision-for-reorganizing-society-around-superintelligence-2000742906 #AIagent #AI #GenAI #AIGovernance
OpenAI Releases Its Vague Vision for Reorganizing Society Around Superintelligence

The company behind ChatGPT announced a set of policy recommendations for the AI era including taxes on automated labor and a public wealth fund.

Gizmodo