OpenAI backs Illinois bill that would limit when AI labs can be held liable

https://archive.md/WzwBY

https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

Quoting the original bill [0]:

> "Critical harm" means the death or serious injury of 100
or more people or at least $1,000,000,000 of damages to rights
in property caused or materially enabled by a frontier model,
through either:
(1) the creation or use of a chemical, biological,
radiological, or nuclear weapon; or
(2) engaging in conduct that:
(A) acts with no meaningful human intervention;
and
(B) would, if committed by a human, constitute a
criminal offense that requires intent, recklessness,
or negligence, or the solicitation or aiding and
abetting of such a crime.

I don't know what I expected from this title, but I was hoping it was more sensationalized than the reality. Unfortunately, no sensationalizing was needed in this case.

> (a) A developer shall not be held liable for critical
harms if the developer did not intentionally or recklessly
cause the critical harms and the developer:
(1) published a safety and security protocol on its
website that satisfies the requirements of Section 15 and
adhered to that safety and security protocol prior to the
release of the frontier model;
(2) published a transparency report on its website at
the time of the frontier model's release that satisfies
the requirements of Section 20.
The requirements of paragraphs (1) and (2) do not apply if
the developer does not reasonably foresee any material
difference between the frontier model's capabilities or risks
of critical harm and a frontier model that was previously
evaluated by the developer in a manner substantially similar
to this Act.

However one thinks regulation for this should be drafted, I doubt publishing a PDF on your website is what most have in mind.

[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...

Illinois SB3444

Illinois SB3444 (2025-2026): Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer (1) agrees to be bound by safety and security requirements adopted by the European Union or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.

I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

Similarly, if a frontier model kills merely 99 people, its developer isn't covered by this immunity. So go big or go home, I guess?

> unless you specifically want to make it illegal to not be OpenAI [...]

If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition while keeping any potentially profit-risking regulations at bay has been a clear throughline in OpenAI's lobbying efforts.

> because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all

Oof. If you're an Illinois resident, please call your elected officials and at least make sure they understand this loophole is there. In all likelihood, nobody other than OpenAI's lobbyists has noticed it.