I attended the hearing for the hapless lawyer who used ChatGPT in his filings and here's my report. A cautionary tale not just for lawyers but for journalists: not about the machine but about our responsibility.

https://medium.com/whither-news/chatgpt-goes-to-court-7e4a0261114f

@jeffjarvis

Ultimately it is very simple: the user is responsible for validating any information they use. Ask yourself every time: Is it true?

Tools are just what they are. Tools.

And for now AI is the Wild West, a technological gold rush. Rules and regulations are needed. The EU's Digital Services Act is a start.

@xs4me2 @jeffjarvis
Of course anyone using ChatGPT for information should validate everything.

But more generally: toolmakers can be held liable if they were negligent. I'm sure there are all sorts of ways AI models could be harmful or dangerous.

@IanStuart @jeffjarvis

Validation of truth is independent even of the tools used. It is a basic principle of science, law, and journalism.

And yes, uncontrolled growth in the hands of commercial toolmakers can be harmful or dangerous.

They should be held accountable, and society should provide the rules and guidance that define where toolmakers are allowed to go. The EU DSA is a start of that, as were the hearings in the US Senate.