Writing used to be proof-of-thought...

This is how I feel about genAI: it's informational poison. If you read or look at it without knowing it is AI, you have already lost.

The solution they proposed is exactly the one I adopted while working on the CRA standards: if you send me AI output, tell me it is AI output and describe any due diligence you did on it. Then I can make a decision about whether to engage.

https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/

It's rude to show AI output to people | Alex Martsinovich

Feeding slop is an act of war

I have a horror of polluting my internal knowledge base with bad information. I literally do not use any genAI tool that generates plausible outputs. I'm only interested in specialized tools whose suggestions can be verified by experts.

@vaurora I've eaten my words on AI being useful, as Claude is impressive as fuck.

But it NEEDS an expert-in-the-loop or it's useless.

The big error all the bullshitters make is denying this, and it really hurts the narrative of AI > all jobs.

@ljs totally agree with all points
@ljs in particular, I think finding the "code smells" genre of bug is a perfect use case for an LLM - if you haven't deskilled or displaced the expert who can verify the output

@vaurora Also, you NEED juniors to become seniors who can assess.

So the whole barely-hidden hatred of programmers that suits have had since the advent of computers is, as usual, thwarted.

Sorry, you still need us, you'll always need us, also fuck you - you're the replaceable ones :)

@vaurora So I've spent a lot of time at this point experimenting with these things, and the plausible information rabbit hole is real and bad. I absolutely refuse to use any speculation that doesn't have a link to a primary source that I can read first. That is one way where Gemini is actually quite good, it quite consistently cites its sources.

(In some ways, it makes Google a good search engine again. I asked for the best prices in Canada for a certain mic and it consolidates the information much better than normal search results.)

@vaurora Difficult. One of the platforms I'm on recently adopted "automatic AI detection" of uploaded content, and ... that went about as well as anyone who understands tech at all would expect.

Relying on a person to self-disclose something that would harm them, and that the recipient has no reliable way of detecting, is tricky.

I see a tightening circle of trust - people whom we trust to responsibly use (or not use) and disclose AI. The world at large? Done for.

>"Here's my PR, I did this and that for this and that reason."
>Entire PR is vibe coded
>Search online for where the LLM stole it from
>Code was taken from a closed PR from years ago with explanation from the maintainer why it was rejected
@vaurora @jaredwhite wait til you hear about vibe project management
@vaurora This. I have vowed to myself that I will never share AI output with anyone else. The few times I have, I've said that it's AI. But I shall add the due diligence bit.