Nature issues its rules for use of LLMs in science papers:
1. ChatGPT can't be an author because it cannot take responsibility for the work.
2. Transparency.
Sensible.

https://www.nature.com/articles/d41586-023-00191-1

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use

As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.

@jeffjarvis Totally agree with this approach. Credit it as a tool used, sure, but not as an author. Authors of science papers first do the research and then write it up.

@jeffjarvis There is a transcript/screenshot of a ChatGPT session where the user gets it to agree that 2+2=5, and others where its "facts" turn out to be false when checked.

My wife likes to point out that every line in a syllabus, a policy, or a notice has a story behind it.

With that in mind, this Nature rules change is chilling.

@GreatBigTable @jeffjarvis Did you mean that the circumstances requiring the rules change are chilling? I’d say the rules change itself is pretty much spot on!

@jeffjarvis
I think they need a zeroth rule: validate the responses that ChatGPT gives you before all else.

IMNSHO that is the biggest issue with the use of ChatGPT right now: people are not verifying the responses to the questions they are asking. They are taking the results as fact.

Then they need a third rule: go back and double-check that the question you asked is really the question you meant to ask. ChatGPT answers the question you asked, not the question you thought you asked.

@jeffjarvis Good first step! But the real risk, in my opinion, is a lack of #critical #thinking/reasoning skills applied to any piece of information presented. This gets worse for ambiguous/complex topics (for those at a remove from expertise in the topic). This existed before #chatgpt and will exist after it. But with the ubiquity of ChatGPT etc., it may confuse many, many more. #transparency is very important!