A federal law should be passed making AI firms fully responsible for any and all content disseminated from their generative AI systems. Period. No exceptions.
@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
@LouisIngenthron False comparison. Not even close.
@LouisIngenthron Excel is, for all practical purposes, a calculator. Users can see all input data and how that data was used to formulate results. This is not the case for generative AI. The full scope of sources used, how those sources were used, and virtually all other aspects of the system are a black box to users. The AI firms want to create new content and then disclaim responsibility for it. Unacceptable.
@lauren Tbf, I've used some Excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?
@LouisIngenthron Regarding your chat example, no, that would be pretty clearly covered by Section 230, since the platform is just hosting third-party speech rather than originating content itself.
@lauren So you believe that the core issue here is that user-prompted content is first-party speech, not third-party speech? Even though the user can ask the system to repeat them verbatim (as I demonstrated above)?
@LouisIngenthron The question isn't prompts, the question is facts. If a user asks a straightforward fact-based question and receives a direct answer that turns out to be wrong and causes that user harm, who is responsible for that answer?
@lauren So long as the provider has a "this might be bullshit" disclaimer, they're not dishing out "facts", so the user is responsible for improperly treating it as such.
@LouisIngenthron I don't think that's going to work in the long run. Courts have routinely ruled that various kinds of corporate disclaimers are invalid in various circumstances (e.g. gross negligence). These AI systems open up a whole new world of negligence claims.
@lauren See, I disagree with that. The negligence is in the user treating an entertainment system like a fact machine. It's every bit as negligent as only getting your news from a comedy program, or consulting Reddit for legal advice.
@LouisIngenthron The difference is that Google for many, many years has built a reputation as a source for finding accurate information. NOT as a comedian. THIS MATTERS.