So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things.

I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.

>>

Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, like the FTC (and here EEOC, CFPB and DOJ) are doing:

https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai

>>

Beyond that, we should be reasoning from identified harms to see how existing laws & regulations apply and where there may be gaps.

I am not a policymaker (nor a lawyer) but my sense of it is that the gaps largely come up in cases where (1) automation obfuscates accountability or (2) data collection creates new risks.

>>

Re (1), we should be asking (as I think many are): how to ensure that people have recourse if automated systems make decisions that are detrimental to them, and how to ensure that communities have recourse if patterns of decisions create or worsen inequity.

(That last point follows from the value sensitive design principle of considering pervasiveness: what happens when the technology is used by many?)

>>

Re (2), I'm thinking of the kinds of risks that arise when data is amassed (risks to privacy, e.g. deanonymization becoming possible after just a few data points are collected) and also risks connected to the ease of data collection.

>>

Sharing art online used to be low-risk for artists: freely available just meant that many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its URL), the economic harms were minimal.

But the story changes when tech bros mistake "free for me to enjoy" for "free for me to collect" and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections.

>>

A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media synthesis machines pose to our information ecosystem.

Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?)

Maybe we already have what we need; maybe there's something to add.

>>

But I strongly doubt that saying "AI" is so new it needs its own "FDA" is going to get us there. Let's sit with and use the power that existing regulations already give us for collective governance.

And not fall for either of these:

Myth #1: The tech is moving too fast! Regulation can't keep up.

Myth #2: The 'real' concern is rogue AGI that poses 'existential risk' to humanity.

@emilymbender

IMHO, the real concern isn't the technology itself, but the economic pressures from the investor class and capitalism.

(Longer take on that, especially regarding the arts):

https://ideatrash.net/2023/05/whether-ai-can-write-a-story-is-the-wrong-question.html

@StevenSaus @emilymbender I'd add that their (false) narrative around the tech is also part of the problem...