Wikipedia has banned AI-generated text, with two exceptions

https://infosec.pub/post/43865778

Saved you a click:

After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing-assistance tool. The policy warns, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

The second exemption covers translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with writing refinements, anyone using LLMs has to check that incorrect information hasn’t been injected.

AIbros: we’re creating God!!!

AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.

The “AI” is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

the user needs to be smart enough to do whatever they’re asking anyway

I’m gonna say that’s ideal but not strictly necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can, but verification extends more broadly: it’s an easier skill to verify a result than it is to obtain that result. Think of how film critics don’t necessarily need to be film*makers*, or of the P=NP question in computer science.
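
To make that asymmetry concrete, here’s a toy sketch (my own contrivance, not anything from the article or Wikipedia’s policy) using subset-sum, a classic NP-complete problem. Checking a proposed answer is a one-liner; finding one by brute force means searching exponentially many subsets:

```python
import itertools

def verify(nums, subset, target):
    # Cheap: one membership pass and one sum.
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    # Expensive: brute force tries up to 2**len(nums) subsets.
    for r in range(len(nums) + 1):
        for combo in itertools.combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)                  # the hard direction
print(answer, verify(nums, answer, 9))   # prints: [4, 5] True (the easy direction)
```

Same idea with an LLM: you don’t have to be able to produce the answer, but you do have to be able to run the check.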

But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.

At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.

Making text flow naturally, grouping and ordering information, good writing.

You can verify that two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.

You were thinking logically about a normal production chain. In that case, QA or whoever says “This is wrong, rework it and correct the issue,” and that’s that. With AI, it does the whole thing over again and may or may not come back with the same issue or an entirely new one.

If you don’t have the ability, then you would do what you would have done 5 years ago: not do it. Either submit without it, or don’t submit at all.

I can’t draw, but I could probably photoshop out some minor issues in an AI-generated image.

If you’re unable to brute-force verification (research, testing, consulting the ancient texts), that’s where you stop what you’re doing and take a breath. Then consult an expert. Just like the film critic analogy, it’s easier to verify than to create, so you’re saving the expert time and effort while learning about something you were obviously already passionate enough about to have started this endeavor.

As someone who codes, it’s not always easier to verify than to create.

As someone who codes, I specifically didn’t say “always” because of course it’s not always true. Especially in the cases of “garbage in, garbage out.”

But there’s still an argument to be made for mental load and context: I’d argue that planning solutions and then writing the code yourself is generally more taxing than having someone hand you suggested solutions with semi-complete code or pseudocode, and then identifying the roadblocks.

On the other hand, if someone you trust unexpectedly hands you hallucinated garbage, then you’re likely to spin your wheels trying to identify what they did.

This is where domain expertise would come in, no? It speeds up the work, but it usually outputs generic content, plus whatever else it injects while hallucinating. So the validation part holds up, I’d say.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

Relying on it in almost any circumstances (medical stuff is understandable if you’re simply too poor or don’t have access) while it is exhausting water supplies and polluting the planet is stupid, and instantly proves that you are inconsiderate as well.

This is absolutely the case, and honestly, it’s how it needs to be across the board, at least for now.

No one should be using AI to do things they’re incapable of doing (or undoing).

Fucking hate those anti-human filth pushing slop into everything. I want to take one apart with power tools.

Damn, that movie was funny. I need to rewatch it.

It holds up better than any movie from the late 90s that I can think of.

Yaaah, but I’ll need you to come in this weekend though. Yaaaahhhh…
I don’t think AI users would say it does reformatting either (if they’re honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. It should only take getting burned once to learn that lesson.
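
If you do want it to reformat anyway, one cheap safeguard (a sketch of my own, not something the thread or Wikipedia prescribes, and the function names are hypothetical) is to strip the formatting from both versions and diff the remaining words, so any wording the model silently changed shows up:

```python
import difflib
import re

def words(text):
    # Reduce text to lowercase word tokens so whitespace,
    # punctuation, and markup differences are ignored.
    return re.findall(r"\w+", text.lower())

def wording_changes(original, reformatted):
    # Word-level diff; an empty result means the "reformat"
    # really did leave the wording alone.
    diff = difflib.ndiff(words(original), words(reformatted))
    return [d for d in diff if d.startswith(("+ ", "- "))]

original = "The policy takes effect immediately."
llm_output = "The policy takes effect right away."  # hypothetical model output
print(wording_changes(original, llm_output))
# prints: ['- immediately', '+ right', '+ away']
```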