So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things.

I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.

>>

Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, as the FTC (and here the EEOC, CFPB, and DOJ) are doing:

https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai

>>

@emilymbender I really appreciate these thoughts about accountability and governance in relation to AI. I keep wondering how currently popular LLM technologies can possibly comply with the GDPR and California's CPRA: the right to be forgotten (right of deletion), the right of correction, the right to opt out of sharing, and so forth. I am not seeing any credible attempt to address this slice of the regulatory issues.

Then, there are new "SEO" services popping up to help you manipulate AI results.

Also, I find many people treating ChatGPT and other AI engines the same as any other Web2 application. AI is different: as you dialog with an AI engine, it is learning about you and absorbing the information you share with it. Much of this is hidden in the fine print of the privacy statements, but the system never tells you, in the interactive dialog itself, that it is doing this or what it will do with the information. Much more caution is in order. And what a challenge for regulators!

Anyway, thanks again for raising this issue.