Seeing more programmers take the stance of "the plagiarism machine works, so I guess we have to accept it".

Contrast that with art and creative industries, where theft and plagiarism aren't new phenomena. Unions keep those industries alive. And it's laughable to imagine telling an artist that they have to accept a plagiarist in their community.

Easy to say "let's all be nice to each other" when you're well paid and haven't lost your job yet. Some tough lessons will be learned soon.

You know how you fight corporations? With unions. Picket lines. New contracts blackballing plagiarists. Older workers walking out to support their (financially insecure) younger peers. That's what past generations did to maintain the world we enjoy today.

You do not fight by saying "plagiarists are people too", by not picketing, and by not walking out.

When you take this stance, you're trying to sound nice, but you're naive -- you're collecting your paycheck and condemning the next generation.

And as an aside: who cares if the plagiarism machines are getting better? I should hope a plagiarist can produce a final product.

It's not a grand observation that theft and plagiarism will help you accomplish a task faster. It's not new information that stealing things is cheaper than making them.

I do not care if Claude produces perfect code. It doesn't, and it won't, but even if it did I would not use it. Because I'm not a fucking plagiarist. And you shouldn't be either.

The action item you can take away from this rant: call Claude and the like what they are: plagiarism machines.

That's not a dig -- that's a more accurate description than "artificial intelligence".

When your coworker argues that it's merely automation, be the nerd who corrects them with "automated plagiarism".

Normalize describing gen AI accurately.

I strongly believe if more people understood how it works they would not use it.

@protowlf Do you have a good source (ironically) that sets out the reasoning behind why we should understand what's going on inside one of these models while code is coming out as "plagiarism"?

They absolutely will spit out whole chunks of code that one can point to and say "that was copied from such-and-such without following the license".

But they also don't *always* do that. Are they *always* plagiarizing, or only sometimes?

@Forbearance @protowlf Someone studied how well coding agents solve simple programming problems in esoteric languages like Whitespace and Brainfuck. The agents all fail at things like "compute n factorial for n < 10". This seems to me a pretty strong indication that the coding agents don't have any significant "reasoning" ability; they mainly work by regurgitating semantically-similar-shaped things from their training data.
https://arxiv.org/abs/2603.09678
EsoLang-Bench: Evaluating Genuine Reasoning in Large Language Models via Esoteric Programming Languages

Large language models achieve near-ceiling performance on code generation benchmarks, yet these results increasingly reflect memorization rather than genuine reasoning. We introduce EsoLang-Bench, a benchmark using five esoteric programming languages (Brainfuck, Befunge-98, Whitespace, Unlambda, and Shakespeare) that lack benchmark gaming incentives due to their economic irrationality for pre-training. These languages require the same computational primitives as mainstream programming but have 1,000-100,000x fewer public repositories than Python (based on GitHub search counts). We evaluate five frontier models across five prompting strategies and find a dramatic capability gap: models achieving 85-95% on standard benchmarks score only 0-11% on equivalent esoteric tasks, with 0% accuracy beyond the Easy tier. Few-shot learning and self-reflection fail to improve performance, suggesting these techniques exploit training priors rather than enabling genuine learning. EsoLang-Bench provides the first benchmark designed to mimic human learning by acquiring new languages through documentation, interpreter feedback, and iterative experimentation, measuring transferable reasoning skills resistant to data contamination.
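For scale: the "Easy tier" task quoted above, n factorial for n < 10, is a few lines in any mainstream language. A sketch in Python (the exact task phrasing is from the post; this implementation is just an illustration, not from the benchmark):

```python
def factorial(n: int) -> int:
    # Iterative n! -- the whole "Easy tier" task the cited models
    # reportedly fail to produce in Brainfuck or Whitespace.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The task asks for n < 10:
print([factorial(n) for n in range(10)])
```

The point of the benchmark is that the same trivial computation, expressed in a language with almost no training data, drops model accuracy to near zero.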
