So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things.

I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.

>>

Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, like the FTC (and here EEOC, CFPB and DOJ) are doing:

https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai

>>

FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI (Federal Trade Commission)

Beyond that, we should be reasoning from identified harms to see how existing laws & regulations apply and where there may be gaps.

I am not a policymaker (nor a lawyer) but my sense of it is that the gaps largely come up in cases where (1) automation obfuscates accountability or (2) data collection creates new risks.

>>

Re (1), we should be asking (as I think many are): how to ensure that people have recourse if automated systems make decisions that are detrimental to them, and how to ensure that communities have recourse if patterns of decisions create or worsen inequity.

(That last point follows from the value sensitive design principle of considering pervasiveness: what happens when the technology is used by many?)

>>

Re (2), I'm thinking of the kinds of risks that happen when data is amassed (risks to privacy, e.g. around deanonymization being possible after just a few data points are collected) and also risks connected to the ease of data collection.
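(To make the deanonymization point concrete: here's a minimal sketch, with entirely hypothetical data, of how a handful of "harmless" attributes can single people out of a nominally anonymized dataset. The ZIP/birth-year/gender combination is the classic quasi-identifier example; nothing here is from the thread itself.)

```python
# Illustrative sketch: counting how many rows in a toy "anonymized"
# dataset are uniquely pinned down by just three quasi-identifiers.
# All records below are made up for the example.
from collections import Counter

# Hypothetical records: (zip_code, birth_year, gender)
records = [
    ("98105", 1984, "F"),
    ("98105", 1984, "M"),
    ("98105", 1991, "F"),
    ("98052", 1984, "F"),
    ("98052", 1975, "M"),
    ("98052", 1975, "M"),  # only this pair shares a full profile
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)} of {len(records)} records are uniquely re-identifiable")
```

With real populations the effect is far stronger: a few attributes per person, each individually innocuous, combine into a near-unique fingerprint.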

>>

Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its url), the economic harms were minimal.

But the story changes when tech bros mistake "free for me to enjoy" for "free for me to collect" and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections.

>>

A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media synthesis machines pose to our information ecosystems.

Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?)

Maybe we already have what we need, maybe there's something to add.

>>

But I strongly doubt that saying "AI" is so new it needs its own "FDA" is going to get us there. Let's sit with and use the power that existing regulations already give us for collective governance.

And let's not fall for either of these:

Myth #1: The tech is moving too fast! Regulation can't keep up.

Myth #2: The 'real' concern is rogue AGI that poses 'existential risk' to humanity.

@emilymbender
IMO both of those are valid rather than myths, but that matters little.

Wrt regulation, it has certainly failed/is failing to control tech corporations, so the question is why. It doesn't matter whether the reason is speed of change or centralisation of power; both cases are likely lost.

We can agree that SA is disingenuous and should not be influencing mitigations, but I'm at a loss to see the politicians, regulators (or corporations) solving this.

I look to empowering individuals with p2p tech.

@emilymbender That would be great if the parts of the tech biz that want to make more billions with AI weren't lobbying the living crap out of the government to look the other way.

@emilymbender

IMHO, the real concern isn't the technology itself, but the economic pressures from the investor class and capitalism.

(Longer take on that, especially regarding the arts):

https://ideatrash.net/2023/05/whether-ai-can-write-a-story-is-the-wrong-question.html

Whether AI Can Write A Story Is The Wrong Question.

No, an AI probably couldn't write what *you* write. Could an AI write some of the crappy formulaic (and sadly, profitable) media that's out there? Oh, yes, absolutely. That's a totally different question, and why I support the WGA.

@StevenSaus @emilymbender I'd add that their (false) narrative around the tech is also part of the problem...
@emilymbender
It is strange. On the one hand, these corporations claim that "AI" will soon be everywhere and in everything. But on the other hand, they want to convince everyone that it should be regulated by a different entity than the one that already regulates all (or at least most of) the stuff "AI" will be used for.
Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices

The FDA has updated the list of AI/ML-enabled medical devices marketed in the United States as a resource to the public.

U.S. Food and Drug Administration

@emilymbender Have you seen Nathalie A. Smuha’s piece on governing AI’s societal harm? (Apologies if you have.) Very powerful ideas in there, drawing on environmental legislation to go beyond individual harms/ interests/ frameworks. Could be interesting for this whole debate?

https://policyreview.info/articles/analysis/beyond-individual-governing-ais-societal-harm

Beyond the individual: governing AI’s societal harm

In this article, I propose a distinction between individual harm, collective harm and societal harm caused by artificial intelligence (AI), and focus particularly on the latter. By listing examples and identifying concerns, I provide a conceptualisation of AI’s societal harm so as to better enable its identification and mitigation. Drawing on an analogy with environmental law, which also aims to protect an interest affecting society at large, I propose governance mechanisms that EU policymakers should consider to counter AI’s societal harm.

Internet Policy Review
@emilymbender
don’t upload anything to Internet, that keeps you safe
@emilymbender
These are the same chuds who would see a tray of "FREE APPLES" , take the entire tray, and sell the apples.
@emilymbender One of the things that came up at the recent Financial Industry Forum on Artificial Intelligence (Canada) was that the protected status identifiers that companies (or third party auditors) would need in order to demonstrate that their models are not unjustly biased are prohibited from being collected in some jurisdictions. Where not prohibited, people are often (understandably) unwilling to provide it because of suspicions over what companies will do with that data.
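(A minimal sketch of the auditing dilemma described above: computing even the simplest group-fairness check, a demographic parity gap, requires exactly the protected attribute that may be prohibited or withheld. The function, labels, and numbers below are all hypothetical.)

```python
# Hedged illustration: a demographic parity gap needs group membership
# labels -- the data the poster notes auditors often cannot collect.
def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rate between groups 'A' and 'B'.

    decisions: 1 = approved, 0 = denied; groups: protected-attribute labels.
    """
    rate = {}
    for g in ("A", "B"):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(picks) / len(picks)
    return abs(rate["A"] - rate["B"])

# Hypothetical audit sample
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # A: 3/4 vs B: 1/4 -> 0.5
```

Without the `groups` column, the computation is simply impossible, which is the bind the forum participants identified.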

@emilymbender

I really appreciate these thoughts about accountability and governance in relation to AI. I keep wondering how currently popular LLM AI technologies can possibly meet GDPR and California CPRA regulations. Right to be forgotten (right of deletion), right of correction, right to opt out of sharing, and so forth. I am not seeing any credible attempt to address this slice of regulatory issues.

Then, there are new "SEO" services popping up to help you manipulate AI results.

Also, I find many people treating ChatGPT and other AI engines the same way as any other Web2 application. AI is different. As you are dialoguing with an AI engine, it is learning about you and absorbing the information you share with it. Much of this is hidden in the fine print of the privacy statements, but it never tells you in your interactive dialog that it is doing that or what it will do with the information. Much more caution is in order. And what a challenge for regulators!

Anyway, thanks again for raising this issue.