An AI slop site I encounter a lot when searching is DeepWiki. It scans code and builds "documentation" wiki pages for open source projects.

It also has a page on @novelwriter. The overarching code structure part is fairly good and mostly correct, apart from a few consistently wrong hangups. But the usage section, which is based on my detailed and extensive documentation, is wrong or misleading on a lot of pretty important and basic stuff.

Useless. Just read the docs.

#AI #Slop #DeepWiki

Of course, like all AI slop, it is incredibly verbose and very generic, with a low useful-info-to-noise ratio (even when it is correct). When I search for info on issues or have technical questions, I have never yet found anything useful there. Even Reddit is more helpful!

I have now blocked it from my search results. I hope users of novelWriter don't rely on it for user documentation.

I was having a discussion with someone at work last week who commented on how often AI is wrong when they ask it about a topic they already know well. So my colleague doesn't trust it. Checking DeepWiki's pages for my own app reveals the same is absolutely true for generated documentation.

And just today someone at work suggested we could just use AI for generating docs. It's gonna be a hard no from me at least.

#AI #Slop

@veronica

yes I can confirm the exact same thing.

If I am a total rookie at something, I can appear to gain something from it, but the returns diminish very quickly. Then I even begin to doubt whether the original info, which appeared plausible, was actually correct.

@oschonrock Another colleague and I were making a quiz for a work social a few weeks back, and she used Google to look up some info for the questions we wanted. Good thing I checked the suggested answers. They were kind of right, but absolutely not correct enough for a quiz.

@veronica

We agree.

What I find concerning:

- the level of wasted investment, misallocated resources, and incorrect corporate decisions made based on the belief that this technology is much better than it probably is;

- the externalities, such as the decreased ability to find genuinely original, high-quality content of the kind we relied on for traditional research before 2022 (cf. search engine results, etc.).

@oschonrock I too remember 2022, or as it shall now be known: Year 1 Before Slop.

@veronica

👍

I forgot another concern...

the ethics....

@veronica I was testing an LLM to see the quality of its output code, and my impression is that a quarter or more of the lines could be removed. As the companies charge by the token, producing very verbose and obtuse code is a very good strategy for them...

@cochise I've spent a lot of time lately cleaning up AI-generated code that is convoluted, fragile, extremely inefficient, and verbose. I've given up on it and will now rewrite the entire feature from scratch.

Time saver my ass.

@veronica
It only works reliably if you already know the answer to the question! Even then, you have to coax it toward accuracy, as it invariably gets it wrong the first time.