OpenAI backs Illinois bill that would limit when AI labs can be held liable

https://archive.md/WzwBY

https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

So they did the math and worked out it's cheaper and easier to lobby the government instead of working to make their product safe.

And these are the people that a lot of programmers want to give the keys to the kingdom. Idiocracy really is in full effect.

> instead of working to make their product safe

Make a nondeterministic product safe how?

Is this the first time you have heard of AI safety?

Lots of articles you could read on the subject and answer your own question.

(Unless your angle is: akshually, you can never make anything 100% safe)

> akshually, you can never make anything 100% safe

Yes, Sherlock. And especially not a natural-language product that can't even output the same thing twice for unchanged input.
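To be concrete about why: production chatbots sample the next token from a probability distribution, and any temperature above zero makes repeated runs genuinely stochastic, while greedy decoding (temperature 0) is repeatable. A toy sketch of that sampling step (illustrative only, not any vendor's actual decoder):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy (deterministic); temperature > 0 -> stochastic.
    """
    if temperature == 0:
        # Greedy decoding: always the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.5]  # made-up scores for three candidate tokens
# Greedy: the same token on every call.
greedy = {sample_next_token(logits, temperature=0) for _ in range(10)}
# Sampled at temperature 1: the chosen token varies across calls.
sampled = {sample_next_token(logits, temperature=1.0) for _ in range(100)}
```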

Besides, when you say "safe" I think of the idiots at Anthropic deleting "the hell" when I pasted a string into Claude and asked "what the hell are those unprintable characters at the beginning and end"...

How many correct answers did they suppress in their quest to make their chatbot "family friendly"?
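(For what it's worth, those invisible characters are typically zero-width or control code points, and you can list them yourself without asking a chatbot. A quick sketch using Python's standard `unicodedata` module; the sample string is a made-up stand-in for whatever was pasted:)

```python
import unicodedata

def show_invisibles(s):
    """Return (index, codepoint, name) for characters that don't print visibly."""
    out = []
    for i, ch in enumerate(s):
        cat = unicodedata.category(ch)
        # Cf = format characters (zero-width space, BOM), Cc = control characters.
        if cat in ("Cf", "Cc") and ch not in "\n\t":
            out.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>")))
    return out

# Hypothetical example: visible text wrapped in a zero-width space and a BOM.
s = "\u200bwhat the hell\ufeff"
report = show_invisibles(s)
# Finds U+200B at index 0 and U+FEFF after the visible text.
```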