A federal law should be passed making AI firms fully responsible for any and all content disseminated from their generative AI systems. Period. No exceptions.
@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
@LouisIngenthron False comparison. Not even close.
@LouisIngenthron Excel is, for all practical purposes, a calculator. Users can see all input data and how that data was used to formulate results. This is not the case for generative AI. The full scope of sources used, how those sources were used, and virtually all other aspects of the system are a black box to users. The AI firms want to create new content and then disclaim responsibility for it. Unacceptable.
@lauren Tbf, I've used some Excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?
@LouisIngenthron The fundamental question is pretty simple. Let's say someone asks a generative AI system a question, it provides an inaccurate answer, and then someone is harmed or killed as a result of that answer. Who is responsible for that answer (which is original content created by that system) and the damage it caused? "Nobody" is not acceptable.

@lauren @[email protected] There is an interesting congressional report on the interplay between generative AI and Section 230 of the CDA. It looks at some of these issues. Good to have in your reference library.

https://crsreports.congress.gov/product/pdf/LSB/LSB11097