OpenAI backs Illinois bill that would limit when AI labs can be held liable
https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/
Quoting the original bill [0]:
> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either:
> (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or
> (2) engaging in conduct that:
> (A) acts with no meaningful human intervention; and
> (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.
I don't know what I expected from this title, but I was hoping it was more sensationalized. Unfortunately, no sensationalizing was needed in this case.
> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer:
> (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model;
> (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20.
> The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.
However one thinks regulation for this should be drafted, I doubt publishing a PDF on your website is what most have in mind.
[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...

Illinois SB3444 (2025-2026): Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer (1) agrees to be bound by safety and security requirements adopted by the European Union or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.
Shifting liabilities from corporations to the public coffers is what companies do. You'll often hear this described as "privatizing profits and socializing losses". Let me introduce you to the Price-Anderson Act of 1957 [1]. It's been repeatedly extended, most recently with the ADVANCE Act [2]. This limits liability for the nuclear power industry in a whole range of ways:
- It moves jurisdiction from state courts to federal court. (As an aside, in recent weeks the party of "states' rights" has been doing something similar to stop states from regulating prediction markets [3];)
- All actions are consolidated into a single claim;
- That claim has an inflation-adjusted absolute limit, which is somewhere around $500 million (I'm not sure of the exact 2026 figure);
- Any damages beyond that are partially shared by the industry through an industry self-funded insurance program;
- The industry as a whole has a total liability limit, also inflation-adjusted. I believe this is around $10 billion.
For context, the cleanup from Fukushima is likely to take a century, and the cost of that single incident may well exceed $1 trillion [4]. So if this happened in the US, the government would be on the hook for almost all of it.
So I have two points here:
1. If you oppose efforts like this bill to shift liability from AI companies to the government (as I do), how do you feel about the nuclear industry doing the exact same thing? and
2. A minor point, but while searching for the latest details I noticed Gemini made factual errors, stating that "the Act is set to expire in 2025" when it was extended in 2024 until 2045. Always check AI's work, people.
[1]: https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...
[2]: https://en.wikipedia.org/wiki/ADVANCE_Act
[3]: https://www.pbs.org/newshour/politics/federal-government-sue...
[4]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...