Writing used to be proof-of-thought...

This is how I feel about genAI: it's informational poison. If you read or look at it without knowing it is AI, you have already lost.

The solution they proposed is exactly the one I adopted while working on the CRA standards: if you send me AI output, tell me it is AI output and any due diligence you did on it. Then I can make a decision about whether to engage.

https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/

It's rude to show AI output to people | Alex Martsinovich

Feeding slop is an act of war

I have a horror of polluting my internal knowledge base with bad information. I literally do not use any genAI tool that generates plausible outputs. I'm only interested in specialized tools that create suggestions that can be verified by experts.

@vaurora I've eaten my words on AI being useful, as Claude is impressive as fuck.

But it NEEDS an expert-in-the-loop or it's useless.

The big error all the bullshitters make is denying this, and it really hurts the narrative of AI > all jobs

@ljs totally agree with all points
@ljs in particular, I think finding the "code smells" genre of bug is a perfect use case for an LLM - if you haven't deskilled or displaced the expert who can verify the output

@vaurora also you NEED juniors to become seniors who can assess.

So the whole barely-hidden hatred of programmers that suits have had since the advent of computers is, as usual, thwarted.

Sorry, you still need us, you'll always need us, also fuck you - you're the replaceable ones :)