There's a host of legal risks that AI companies, and companies that use generative AI, are putting themselves in the path of, and we don't talk about them enough:

πŸ“œ It's pretty clear Section 230, the foundational law enabling today's internet, DOES NOT protect AI-generated content like that from ChatGPT, Claude or Google's generative search experience

πŸš—πŸ’₯πŸš™ Generative AI could also put companies at risk of product liability claims

My deep dive:

1/🧡

(gift link)

https://www.wsj.com/tech/ai/the-ai-industry-is-steaming-toward-a-legal-iceberg-5d9a6ac1?st=fzthflzxv4l5hgn&reflink=desktopwebshare_permalink

β€œIf in the coming years we wind up using AI the way most commentators expect, by leaning on it to outsource a lot of our content and judgment calls, I don’t think companies will be able to escape some form of liability.”

-- Jane Bambauer, professor of law at the University of Florida

She's written a whole paper on yet a *third* category of legal risk using generative AI could open companies up to, which I didn't even have space for:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4432822

2/🧡

β€œGenerative AI is the wild west when it comes to legal risk for internet technology companies, unlike any other time in the history of the internet since its inception.”

-- Graham Ryan, a litigator at Jones Walker who will soon publish a paper in the Harvard Journal of Law and Technology on the legal risks of generative AI and why Section 230 doesn't protect companies that use it

3/🧡

@mimsical Counter: "using #AI the way most commentators expect" is already far from the most common use case today, and it will matter less and less.

Section 230 doesn't apply to, e.g., automated pipelines of internal documents, and using #LLMs for them doesn't change that.

For all the media attention on content creation for public consumption, most #LLM use is very boring office work.

@erispoe @mimsical

This. Content generation is very visible and in the public mind at the moment. But a lot of the real utility is internal or in the back end.

New Hampshire House passes AI election rules after Biden deepfake

The New Hampshire state House advanced a bill Thursday that would require political ads that use deceptive artificial intelligence (AI) to disclose use of the technology, adding to growing momentum in states to add AI regulations for election protection. The bill passed without debate in the state House and will advance to the state Senate.

The Hill

@mimsical

Humans do what LLMs do every day without even being consciously aware of it. Just about everything we write was influenced in some way by something that we read at some point. But we established long ago that this isn't usually copyright infringement unless it meets certain criteria[1]. But despite generative AI being somewhat inscrutable, it's still far more scrutable than the human mind. Additionally, case law applying to humans doesn't necessarily apply to AI. The potential legal liability is extremely high, and I'm not sure what the reward is.

1. https://en.wikipedia.org/wiki/Paraphrasing_of_copyrighted_material#In_the_United_States

Paraphrasing of copyrighted material - Wikipedia

@mimsical And/or things like that Air Canada lawsuit. I expect many companies to stumble over their own versions of that in the next couple of years.

@mimsical

Generative AI is promising politicians that it will help them gaslight and subjugate their constituents. Palantir has already delivered on that promise at the CDC by providing the Biden Administration with the propaganda used to get Americans to embrace mass death via SARS-CoV-2. Don't expect any help from the courts on this--politicians on both sides have already burnt their boats. Once you make a deal with the devil he owns you for life.

@noyes

@mimsical

Imagine if this #AI strategy could be #audited.

Just make #AI auditable.

If you use this for hiring decisions or summaries for corporate decisions, legal has to approve it, so the companies that use it have to stand behind it, right?

Just make AI content, flagged as AI Content.

Make it Opt-IN. MAKE #AIOptIn #AI

_NOT_ OPT-OUT.

Here is a policy mockup. πŸ‘‡

@mimsical I wanted to read the article but the WSJ had a paywall. Bummer. So here's a link to the same article without a paywall.

https://archive.ph/ozPQg

@mimsical @dsilverman so, clearly to slow things down we encourage a bunch of young law school grads to find clients and start pumping out class action claims.