"but, what does 'LLM code' means??? and how would we police it???"

it... ain't that hard fam, we're not trying to solve the philosophical conundrum of sentience

occam's razor applies: the simplest explanation is "code that was generated by a large language model". if that's not clear enough then nothing is, and words are meaningless

how do you police it? the same way you police people sending plagiarized code. if the project policy says no LLM code, assume people respected it unless proven otherwise

i'm so tired of this argument that basically just tries to derail from the actual topic

@navi „but, but when LLMs become so good that you can‘t tell?“

Well, let’s talk again then, ok?

@chris_evelyn imo it is irrelevant whether you can tell or not

a person can always transform the output enough that it's not obvious, but the point is having a policy and enforcing it on a best-effort basis

there are always people trying to lie and take advantage of stuff. we do our best against that, but as the saying goes, "don't let perfect be the enemy of good"
@navi I know, I was just reiterating a common argument against rejecting LLM slop
@navi
You police stolen code by finding the source of the code, comparing the submission to that source, and then demonstrating that the submitter had access to it.

@navi already, most LLM code comes from "agents" that clearly brand and label themselves.

it'll stay this way, because LLMs don't scale down. you need an illegal amount of training data to make one, so they can only be owned by companies. those companies will want most usage of their product to be branded and controlled, not composable and generic.

IMO the "LLM neutral" people believe LLMs will be generic as the personal computer. they bought the marketing that it's a technology, not a product.