So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things.

I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.

>>

Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, like the FTC (and here EEOC, CFPB and DOJ) are doing:

https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai

>>

Beyond that, we should be reasoning from identified harms to see how existing laws & regulations apply and where there may be gaps.

I am not a policymaker (nor a lawyer) but my sense of it is that the gaps largely come up in cases where (1) automation obfuscates accountability or (2) data collection creates new risks.

>>

Re (1), we should be asking (as I think many are): how to ensure that people have recourse if automated systems make decisions that are detrimental to them --- and how to ensure that communities have recourse if patterns of decisions create or worsen inequity.

(That last point follows from the value sensitive design principle of considering pervasiveness: what happens when the technology is used by many?)

>>

Re (2), I'm thinking of the kinds of risks that arise when data is amassed (risks to privacy, e.g. deanonymization becoming possible after just a few data points are collected) and also risks connected to the ease of data collection.

>>

Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its URL), the economic harms were minimal.

But the story changes when tech bros mistake "free for me to enjoy" for "free for me to collect" and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections.

>>

A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media synthesis machines pose to our information ecosystems.

Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?)

Maybe we already have what we need, maybe there's something to add.

>>

Cause of action - Overview and how to specify elements

Thomson Reuters Law Blog