I attended the hearing for the hapless lawyer who used ChatGPT in his filings and here's my report. A cautionary tale not just for lawyers but for journalists: not about the machine but about our responsibility.

https://medium.com/whither-news/chatgpt-goes-to-court-7e4a0261114f

@jeffjarvis

Ultimately it is very simple: the user is responsible for validating any information they use. Ask yourself every time: is it true?

Tools are just what they are. Tools.

And for now, AI is the Wild West, a technological gold rush. Rules and regulations are needed. The EU DSA is a start.

@xs4me2 @jeffjarvis
Of course anyone using ChatGPT for information should validate everything.

But more generally: tool makers can be held liable if they were negligent. I'm sure there are all sorts of ways AI models could be harmful or dangerous.

@IanStuart @xs4me2
A typesetter and printing press can produce truth or lies. In the early days of print, they were the ones held liable, often beheaded, behanded, or burned. Later, the author became the responsible party; indeed, Foucault argues, that is the birth of the concept of the author. So who is responsible today for a machine that will do what it is told?

@jeffjarvis @IanStuart

An intricate problem indeed, and one of all times. It is about the essence of truth: how it is found and how it is validated.

Designers (flawed and human as they are) are responsible and should be held accountable for the validation of their tools and algorithms.

Society needs to provide a framework (rules and regulations).

Current AI developments pose a huge challenge to the concept of truth and to its perception and manipulation... dangerous times...

@jeffjarvis @xs4me2 No doubt this is a question we, as a society, have to answer.

Who enabled it? Who wrote the code and (if applicable) how was it trained? Who financed it? What safeguards were put into place? Was the AI model sold (if so, what warnings and expectations were communicated)? On whose hardware is it running? Is there a physical robot? What was asked of it? #AIethics #moralsAndArtificialIntelligence #morals #ArtificialIntelligenceEthics #ArtificialIntelligenceMorals

@IanStuart @jeffjarvis

Validation of truth is independent of tools. It is a basic principle of science, law, and journalism.

And yes, uncontrolled growth in the hands of commercial toolmakers can be harmful or dangerous.

They should be held accountable, and society should provide the rules and guidance on where toolmakers are allowed to go. The EU DSA is a start of that, as were the hearings in the US Senate...